Efficiency Tips for Basic R Loop




Svetlana Eden, Department of Biostatistics, Vanderbilt University School of Medicine

Contents

1 Efficiency Defined.
2 The Flexibility of R.
3 Vector Operations and Their Time Complexity.
4 Regular loop by-index and its complexity.
5 Myth Busters: sapply() and its efficiency
6 Loops with quadratic time complexity.
  6.1 Regular loops by-name and their complexity
  6.2 Loops with c(), cbind(), rbind()
  6.3 Loops and subsets
7 Summary
8 Example
  8.1 Single record per ID
  8.2 Multiple records per ID
9 Acknowledgements.
10 Appendix. Example Code.

1 Efficiency Defined.

Code 1 and Code 2 (see below) perform the same computational task. Which code is more efficient? It's hard to say without defining efficiency first.

Code 1.

uniqueIDs = unique(data$id)
for (bi in 1:bootTimes){
  bootstrappedIDs = sample(x=uniqueIDs, size=length(uniqueIDs), replace=TRUE)
  bootstrappedData = data[data$id==bootstrappedIDs[1],]
  for (i in 2:length(bootstrappedIDs)){
    bootstrappedData = rbind(bootstrappedData, data[data$id==bootstrappedIDs[i],])
  }
  res = bootstrappedData
}

Code 2.

indPerID = tapply(1:nrow(data), data$id, function(x){x})
lengthPerID = sapply(indPerID, length)
for (bi in 1:bootTimes){
  bootstrappedIDIndices = sample(x=1:length(indPerID), size=length(indPerID), replace=TRUE)
  newDataLength = sum(lengthPerID[bootstrappedIDIndices], na.rm=TRUE)
  allBootstrappedIndices = rep(NA, newDataLength)
  endInd = 0
  for (i in 1:length(bootstrappedIDIndices)){
    startInd = endInd + 1
    endInd = startInd + lengthPerID[bootstrappedIDIndices[i]] - 1
    allBootstrappedIndices[startInd:endInd] = indPerID[[bootstrappedIDIndices[i]]]
  }
  res = data[allBootstrappedIndices, ]
}

For some, Code 1 is more efficient because it's shorter (we spent less time typing it). Or, if we are more interested in the readability of the code, we will rate them both as inefficient because of the tiny font. In the context of this talk, we agree that, given two pieces of code that perform the same task, one is more efficient than the other if it runs faster or uses less CPU time. During the talk we will evaluate the efficiency of different coding constructs in R. We will familiarize ourselves with the concept of time complexity, a measurement of efficiency. Using this concept, we will compare the performance of several R loops, pointing out which coding constructs should be avoided to maximize efficiency. We will also look at an example of inefficient code and will work through how to significantly improve its efficiency.

2 The Flexibility of R.

When we work with R, we have to choose a certain data structure. Our choices are:

vector
data.frame
matrix
list
array

The choice of data structure is usually straightforward. For example, when we have a list of subject IDs, we can store it in a vector. When we have several subjects, and each one has a set of records, we can use a data.frame. When we need to store a variance-covariance matrix, we could use a data.frame, but a matrix would be a better choice

because matrices come with convenient and fast matrix operations. And when we want to store everything in one object, the only data structure that can do this is a list. So in spite of the many choices, choosing a data structure in R is intuitive and easy.

Having many choices may also be confusing. Suppose we are using a data.frame.

> data
   id val
1 AAA 112
2 BBB  56
3 CCC  99

We would like to retrieve some data from it. More specifically, we want to get the element in the first row and the first column, AAA. Retrieving an element from a data structure is called referencing. One way of referencing, using the first column name ("id") and the first row index (1), is:

data$id[1]

Because of R's flexibility we can accomplish the same task in multiple ways, including:

data$"id"[1]     the same as the previous one, but with quotes. Like the previous way, it doesn't allow passing the variable id as an argument to a function.
data[["id"]][1]  allows passing the variable id to a function; the same type of referencing, but too many brackets.
data[1, "id"]    same as the previous one, but fewer brackets (and therefore less typing)
data["1", "id"]  referencing a row by name
data["1", 1]     referencing a row by name and a column by index
data[[1]][1]     referencing a row and a column by index, with some extra brackets
data[1, 1]       same, but fewer brackets (less typing)

With all these choices at hand, it is our responsibility to choose the one that fits our needs. In the context of this talk, the way of referencing, by index or by name, makes a big difference. Referencing by name is provided by R to make our lives easier, because we remember names better than numbers. Indices are better for the computer. If the computer knows the index, it can calculate the address of the data element and can access it very fast, without searching. It takes longer for the computer to search for a name. If we compare different ways of referencing one element, we might not feel the difference, but the larger the data, the more noticeable the difference becomes.
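As a small aside, the remark about passing the column name as a variable can be made concrete with a short sketch. The helper function firstValue below is made up for illustration and is not part of the talk; it simply shows why [[ ]] referencing is handy when the column name arrives in a variable:

## A hypothetical helper: the column to extract is passed as a character string.
firstValue = function(d, colName){
  ## d$colName would literally look for a column called "colName",
  ## so [[ ]] referencing is used here instead.
  d[[colName]][1]
}

data = data.frame(id=c("AAA", "BBB", "CCC"), val=c(112, 56, 99))  # the small data frame from above
firstValue(data, "id")   # first element of column id
firstValue(data, "val")  # 112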

3 Vector Operations and Their Time Complexity.

To process large data we need a loop. Suppose we have a vector:

> vec = rep(1, 10)
> vec
 [1] 1 1 1 1 1 1 1 1 1 1

And we would like to multiply each element by 2. One way of doing this is:

> vec*2
 [1] 2 2 2 2 2 2 2 2 2 2

In this example we did not reference each element of the vector. Performing arithmetic operations on a vector as a whole (without referencing each element) is called a vector operation. Vector operations might not look like it, but they are loops written in C and hidden by R. They are the fastest loops in R. To show this, we present a graph whose X-axis displays the length of the vector and whose Y-axis displays how much CPU time is spent on the vector operation:

Figure 1: Efficiency of a vector operation as a function of vector length (CPU seconds vs. vector length).
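Timings like the ones plotted in Figure 1 could be collected with something along the following lines. This is a minimal sketch, not the talk's actual benchmarking code; the vector lengths and the use of system.time() are assumptions:

lengths = seq(10000, 200000, by=10000)
cpuTime = rep(NA, length(lengths))
for (j in 1:length(lengths)){
  vec = rep(1, lengths[j])
  ## "user.self" holds the CPU time spent in R itself;
  ## a single run may report 0 for the smallest lengths
  cpuTime[j] = system.time(vec*2)["user.self"]
}
plot(lengths, cpuTime, type="l", xlab="Vector Length", ylab="Seconds")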

The graph shows CPU time as a function of the vector length. This function is monotonic: the longer the vector, the longer it takes to complete an operation on it. Note that the function is approximately linear. We will refer to this function as time complexity.

To show that vector operations are the fastest loops in R, we compare their performance to other loops. Learning other ways of running a loop is important since vector operations cannot do everything. For example, if you try to compare (2.4 - 2.1) to 0.3 you will see the following:

> (2.4 - 2.1)
[1] 0.3
> (2.4 - 2.1) == 0.3
[1] FALSE

This is a known problem of binary number representation. One way of handling it is:

> isTRUE(all.equal(2.4 - 2.1, 0.3))
[1] TRUE

Unfortunately, this will not work element-wise on vectors, so to implement a similar comparison for a vector we have to use a regular loop (a sketch of such a loop is shown after Figure 2 below).

4 Regular loop by-index and its complexity.

Here is an example of a regular loop:

vec = rep(1, n)
for (i in 1:n){
  vec[i] = vec[i]*2
}

Let's graphically compare the time complexity of the regular loop and the vector operation:

Figure 2: Graphical comparison of the efficiency of the vector operation and the regular loop (CPU seconds vs. vector length).

As the graph shows, the performance of the vector operation is much better. We also notice that the regular loop's time complexity is approximately linear, similar to the vector operation. We will denote linear time complexity as O(n).
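Returning to the floating-point comparison from the end of the previous section, here is a minimal sketch of how such an element-wise check could be written as a regular loop. The example vectors and the default tolerance of all.equal() are assumptions for illustration:

x = c(2.4 - 2.1, 0.5, 1.0)
y = c(0.3, 0.5, 1.0)
nearlyEqual = rep(NA, length(x))
for (i in 1:length(x)){
  ## compare element by element, tolerating tiny binary-representation errors
  nearlyEqual[i] = isTRUE(all.equal(x[i], y[i]))
}
nearlyEqual   # TRUE TRUE TRUE, whereas x == y would report FALSE for the first element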

5 Myth Busters: sapply() and its efficiency

R has many ways of making programming easier. One of them is the function sapply(). It can be used with different data structures and plays the role of a regular loop.

> vec = c(10, 20, 30, 40, 50)
> sapply(vec, function(x){x*2})
[1]  20  40  60  80 100

There is a myth among R users that sapply(), when applied to a vector, is faster than a regular loop. To check whether this is true, let's look at the graph.

Figure 3: Graphical comparison of the efficiency of the regular loop and sapply() (CPU seconds vs. vector length).

The myth about sapply() is dispelled by the graph, where the upper curve is the time for sapply() and the lower curve is the time for the regular loop. The victory is small, but the regular loop is still more efficient.
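A quick way to check this claim on your own machine is sketched below; the vector length is an arbitrary choice and this is not the talk's benchmarking code:

n = 200000
vec = rep(1, n)
## time the sapply() version
system.time({
  res1 = sapply(vec, function(x){x*2})
})
## time a regular loop by index over the same vector
system.time({
  res2 = rep(NA, n)
  for (i in 1:n){
    res2[i] = vec[i]*2
  }
})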

6 Loops with quadratic time complexity.

So far we have seen loops with linear complexity. We learned that:

vector operations are the fastest, but cannot perform all operations
regular loops by-index are slower than vector operations, but they can do everything
sapply() applied to vectors doesn't improve efficiency

Now let's talk about less efficient loops. The following example will help us understand why they are less efficient. As you might know, our department has a softball team. Let's imagine that at the end of each season our team holds a closing ceremony, during which our coach distributes game balls to each player. The balls are named and distributed in the following way. The coach, Cole, is standing on one side of the field and the rest of the team stands on the other side. The team is so far away from the coach that he cannot see the faces, but he knows which player is standing where because he has a list with each player's name and his/her place in the line:

Cathy - 3rd
Mario - 8th
Bryan - 2nd
Sarah - 1st
Jeff - 11th
and so on...

Let's calculate the complexity of this ceremony, but instead of time we will count the number of throws made by the coach. Since Cole knows the players' places, each ball takes exactly one throw:

Cathy is 3rd - one throw
Mario is 8th - one throw
Bryan is 2nd - one throw
Sarah is 1st - one throw
Jeff is 11th - one throw
and so on

Therefore, the complexity = 1 + 1 + 1 + 1 + 1 + ... = O(n).

6.1 Regular loops by-name and their complexity

Now, suppose that coach Cole could not make it to the ceremony. The league provided the team with a substitute coach. The substitute coach was not familiar with the ceremony. He knew that he had to distribute softballs, but didn't know that they were signed, and he didn't have a list. Since he didn't know who was where, he decided to systematically throw each ball to the first player in line. If the ball didn't belong to the player, the player would throw it back to the coach. For example, if the ball had Cathy's name on it, the coach would throw it to Sarah, and Sarah would throw it back to him. Then he would throw it to Bryan, and Bryan would throw it back. Then the coach would throw it to Cathy, and she would keep the ball. Next, if the ball had Mario's name on it, the coach would start with Sarah again, and he would throw the ball to everybody in the line until it finally got to Mario. And so on. Let's calculate the complexity of this ceremony.

Cathy is 3rd - three throws
Mario is 8th - eight throws
Bryan is 2nd - two throws

Sarah is 1st - one throw
Jeff is 11th - eleven throws
and so on

Overall, 3 + 8 + 2 + 1 + 11 + ... = 1 + 2 + 3 + 4 + ... = n(n + 1)/2 = O(n^2); the complexity is quadratic.

How is this example related to our talk? As we mentioned at the beginning, R provides us with several ways of referencing data elements: by index and by name. We can also run a loop by index or by name. Earlier we saw an example of a loop by index:

vec = rep(1, n)
for (i in 1:n){
  vec[i] = vec[i]*2
}

A loop by name looks similar:

> vec = rep(1, n)
> names(vec) = paste("name", 1:length(vec), sep="")
> vec
name1 name2 name3 name4 name5 name6
    1     1     1     1     1     1
> for (n in names(vec)){
+   vec[n] = vec[n]*2
+ }
> vec
name1 name2 name3 name4 name5 name6
    2     2     2     2     2     2

As you can see, the only visible difference between these two loops is how the element of the vector is referenced. Names are convenient for people, but not as much for the computer. To access an element by name, the computer has to look for it, as in the example with the softball ceremony. Just as the substitute coach had to start from the beginning of the line to find the right person, the computer has to start from the beginning of the data structure to find an element with the correct name. As a result, the complexity of the loop by name is no longer linear, it's quadratic: 1 + 2 + 3 + 4 + ... = n(n + 1)/2 = O(n^2). The difference in efficiency between the two loops, by index and by name, is demonstrated in the following graph.

Figure 4: Graphical comparison of the efficiency of the regular loop and the loop by name (CPU seconds vs. vector length).
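If the data really does arrive keyed by name, one possible workaround (an illustration, not covered in the talk) is to translate the names into indices once with match(), which is a single vectorized lookup, and then run the loop by index:

n = 5000
vec = rep(1, n)
names(vec) = paste("name", 1:n, sep="")
wanted = names(vec)              # the names we need to visit, possibly in any order
idx = match(wanted, names(vec))  # one vectorized name lookup instead of one per iteration
for (i in idx){
  vec[i] = vec[i]*2              # the loop itself now references by index only
}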

6.2 Loops with c(), cbind(), rbind()

More often than loops by name, we use loops with the functions c(), cbind(), or rbind(). For example:

vec = c()
for (i in 1:n){
  vec = c(vec, 2)
}

The code shows that we start with an empty vector and grow this vector inside the loop. During each iteration i of the loop, the computer erases the previous vector of length i-1 and creates a new vector of length i. This process is illustrated by the following diagram. As seen from the diagram, the computer works harder and harder on each iteration of the loop. Its complexity is 1 + 2 + 3 + 4 + ... = n(n + 1)/2 = O(n^2). To what degree this loop is less efficient than a loop by index is shown in the following graph:

Figure 5: Graphical comparison of the efficiency of the regular loop and the loop with c() (CPU seconds vs. vector length).
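The same comparison can be run directly. The sketch below (vector length chosen arbitrarily, not the talk's benchmarking code) times growing a vector with c() against preallocating it and filling it by index:

n = 50000
## growing with c(): quadratic, because the vector is copied on every iteration
system.time({
  grown = c()
  for (i in 1:n){
    grown = c(grown, 2)
  }
})
## preallocating and filling by index: linear
system.time({
  filled = rep(NA, n)
  for (i in 1:n){
    filled[i] = 2
  }
})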

Let's also see the difference in efficiency between these two loops for several vector lengths:

Vector length   Regular loop   c() loop
10,000          0 sec.         0.3 sec.
100,000         0.3 sec.       28 sec.
1,000,000       2.9 sec.       46 min.
10,000,000      29 sec.        3 days
100,000,000     5 min.         318 days

Similar to c(), using the functions cbind() or rbind() inside a loop to grow a data structure leads to quadratic complexity and should be avoided.

6.3 Loops and subsets

Suppose we write a loop that iterates over subject IDs, and on each iteration it subsets the data frame on the current ID. Such loops will also have quadratic complexity, because when we subset a data frame, either by using subset(data, id==333) or by using data[data$id==333, ], the computer has to search all IDs on each iteration to make sure it found every single one, which results in a complexity of n + n + n + n + ... = n * n = O(n^2).

7 Summary

Vector operations are the fastest, but cannot perform all operations.
Loops by index are not as fast, but can do everything.
sapply() on vectors is less efficient than loops by index.
Loops by name can be handy, but should be avoided for large data.
Loops with c(), cbind(), or rbind() should be avoided for large data.
Loops with subset(data, id==333) or data[data$id==333, ] should be avoided for large data.

8 Example

8.1 Single record per ID

Let's see an example of how to improve the efficiency of a loop. Suppose we have a data frame.

> data = data.frame(id=c("AAA", "BBB", "CCC"), val=c(112, 56, 99))
> data
   id val
1 AAA 112
2 BBB  56
3 CCC  99

We would like to bootstrap its records. In other words, we would like to get a random sample, with replacement, of the data frame records, and this random sample should have the same number of rows as the original data frame. One way of doing this is to sample the IDs, then to subset the data frame on these IDs, and finally to put all sampled data records together.

> set.seed(1)
> sampledIDs = sample(data$id, size = length(data$id), replace = TRUE)
> sampledIDs
[1] AAA BBB BBB
> bootResult = subset(data, id %in% sampledIDs[1])
> bootResult
   id val
1 AAA 112
> bootResult = rbind(bootResult, subset(data, id %in% sampledIDs[2]))
> bootResult
   id val
1 AAA 112
2 BBB  56
> bootResult = rbind(bootResult, subset(data, id %in% sampledIDs[3]))
> bootResult
    id val
1  AAA 112
2  BBB  56
21 BBB  56

Using a regular loop we get the following:

bootIDs = sample(x=unique(data$id), size=length(unique(data$id)), replace=TRUE)

bootData = data[data$id==bootIDs[1],]
for (i in 2:length(bootIDs)){
  addID = subset(data, id==bootIDs[i])
  bootData = rbind(bootData, addID)
}

This loop will work correctly but, as we learned earlier, inefficiently, because of the functions subset() and rbind(). Luckily, there is a more efficient way to do the same task. We can sample the indices of the original data frame and use them to retrieve its records. We can do this because each ID in the data frame has a corresponding index. After sampling the indices:

> set.seed(1)
> sampledIndices = sample(1:nrow(data), size = nrow(data), replace = TRUE)
> sampledIndices
[1] 1 2 2

we can retrieve the corresponding records:

> data[sampledIndices,]
     id val
1   AAA 112
2   BBB  56
2.1 BBB  56

8.2 Multiple records per ID

Now, to make things more challenging, let's consider an example with several records per ID:

> data
   id val
1 AAA 112
2 AAA 113
3 BBB  56
4 BBB  57
5 BBB  58
6 CCC  99

We would like to generate a bootstrap sample of this data with the same number of IDs, and each ID has to have the same records as in the original data frame. Let's try to use our fast algorithm and simply bootstrap by index:

> sampledIndices = sample(1:nrow(data), size = nrow(data), replace = TRUE)
> sampledIndices
[1] 2 3 4 6 2 6
> data[sampledIndices,]
     id val
2   AAA 113
3   BBB  56
4   BBB  57
6   CCC  99
2.1 AAA 113
6.1 CCC  99

This did not work correctly: in the resulting data frame, ID BBB has only two records instead of three, and ID AAA doesn't have the same records as in the original data frame. This happened because when we simply sample indices, we don't take into account that they belong to a certain ID. The first method works correctly (but inefficiently):

bootIDs = sample(x=unique(data$id), size=length(unique(data$id)), replace=TRUE)
bootData = data[data$id==bootIDs[1],]
for (i in 2:length(bootIDs)){
  addID = subset(data, id==bootIDs[i])
  bootData = rbind(bootData, addID)
}

> bootData

    id val
1  AAA 112
2  AAA 113
3  BBB  56
4  BBB  57
5  BBB  58
31 BBB  56
41 BBB  57
51 BBB  58

Note that if the number of records were the same for each ID, we could reshape our data into a wide format and bootstrap it by index. In our case the number of records is different for each ID, and instead of reshaping, we would like to apply the idea of sampling indices instead of IDs. To achieve this, we will create a data structure in which each ID has a single unique index, and at the same time this data structure will keep all indices for each ID:

> ### bootstrap preparation
> indPerID = tapply(1:nrow(data), data$id, function(x){x})
> indPerID
$AAA
[1] 1 2

$BBB
[1] 3 4 5

$CCC
[1] 6

Now we can sample the indices of this data structure:

> set.seed(1)
> bootIDIndices = sample(x=1:length(indPerID), size=length(indPerID), replace=TRUE)

> bootIDIndices
[1] 1 2 2

Next we gather the original indices of each ID using the following loop,

> lengthPerID = sapply(indPerID, length)
> newDataLength = sum(lengthPerID[bootIDIndices], na.rm=TRUE)
> allBootIndices = rep(NA, newDataLength)
> endInd = 0
> for (i in 1:length(bootIDIndices)){
+   startInd = endInd + 1
+   endInd = startInd + lengthPerID[bootIDIndices[i]] - 1
+   allBootIndices[startInd:endInd] = indPerID[[bootIDIndices[i]]]
+   print(allBootIndices)
+ }
[i=1] 1 2 NA NA NA NA NA NA
[i=2] 1 2 3 4 5 NA NA NA
[i=3] 1 2 3 4 5 3 4 5

and retrieve the corresponding rows from the original data frame:

> data[allBootIndices, ]
     id val
1   AAA 112
2   AAA 113
3   BBB  56
4   BBB  57
5   BBB  58
3.1 BBB  56
4.1 BBB  57
5.1 BBB  58

This idea is illustrated in the following diagram:

Using indices allows us to improve the efficiency of the code

bootIDs = sample(x=unique(data$id), size=length(unique(data$id)), replace=TRUE)
bootData = data[data$id==bootIDs[1],]
for (i in 2:length(bootIDs)){
  addID = subset(data, id==bootIDs[i])
  bootData = rbind(bootData, addID)
}

by replacing subset() and rbind() with a loop in which every element of every data structure is referenced by index:

bootIDIndices = sample(x=1:length(indPerID), size=length(indPerID), replace=TRUE)
newDataLength = sum(lengthPerID[bootIDIndices], na.rm=TRUE)
allBootIndices = rep(NA, newDataLength)
endInd = 0
for (i in 1:length(bootIDIndices)){
  startInd = endInd + 1
  endInd = startInd + lengthPerID[bootIDIndices[i]] - 1
  allBootIndices[startInd:endInd] = indPerID[[bootIDIndices[i]]]
}

Was this worth the effort, though? The following table presents processing times for both methods. These times were computed for 200 bootstrap replications and for data frames with a random number of records per ID. The code for both methods can be found in the Appendix.

Num. of IDs   First method   Improved method
100           16 sec.        0.4 sec.
200           3 min.         2.6 sec.
400           49 min.        22 sec.
1000          8.6 hours      2.2 min.

9 Acknowledgements.

I would like to thank Cathy Jenkins, Chris Fonnesbeck, Terri Scott, Jonathan Schildcrout, Thomas Dupont, Jeff Blume, and Frank Harrell for their help and useful feedback.

10 Appendix. Example Code.

The following functions generate bootstrap samples by ID. The functions are presented in the order of their efficiency. Note that to use these efficiently in practice, it is better to include your calculations in the function (for example, the point estimate of the parameter of interest for each bootstrap sample) and return the calculated values.

### Least efficient: ####################
bootstrapByID = function(data, idvar, bootTimes=200){
  data = data[!is.na(data[[idvar]]),]
  uniqueIDs = unique(data[[idvar]])
  for (bi in 1:bootTimes){
    bootstrappedIDs = sample(x=uniqueIDs, size=length(uniqueIDs), replace=TRUE)
    bootstrappedData = data[data[[idvar]]==bootstrappedIDs[1],]
    for (i in 2:length(bootstrappedIDs)){
      bootstrappedData = rbind(bootstrappedData, data[data[[idvar]]==bootstrappedIDs[i],])
    }
    res = bootstrappedData
  }
}

### More efficient: ###################
bootstrapByIndexWithUnlist = function(data, idvar, bootTimes=200){
  data = data[!is.na(data[[idvar]]),]
  indPerID = tapply(1:nrow(data), data[[idvar]], function(x){x})
  lengthPerID = sapply(indPerID, length)
  for (bi in 1:bootTimes){
    bootstrappedIDIndices = sample(x=1:length(indPerID), size=length(indPerID), replace=TRUE)
    res = data[unlist(indPerID[bootstrappedIDIndices]), ]
  }
}

### Most efficient: ###################
bootstrapByIndex = function(data, idvar, bootTimes=200){
  data = data[!is.na(data[[idvar]]),]
  indPerID = tapply(1:nrow(data), data[[idvar]], function(x){x})
  lengthPerID = sapply(indPerID, length)
  for (bi in 1:bootTimes){
    bootstrappedIDIndices = sample(x=1:length(indPerID), size=length(indPerID), replace=TRUE)
    newDataLength = sum(lengthPerID[bootstrappedIDIndices], na.rm=TRUE)
    allBootstrappedIndices = rep(NA, newDataLength)
    endInd = 0
    for (i in 1:length(bootstrappedIDIndices)){
      startInd = endInd + 1
      endInd = startInd + lengthPerID[bootstrappedIDIndices[i]] - 1
      allBootstrappedIndices[startInd:endInd] = indPerID[[bootstrappedIDIndices[i]]]
    }
    res = data[allBootstrappedIndices, ]
  }
}
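As a concrete illustration of the note above about returning calculated values, here is one hedged way the most efficient function could be adapted so that each bootstrap replication returns a point estimate. The statistic (the mean), the column name valvar, and the function name bootstrapMeanByIndex are assumptions added for illustration; they are not part of the original code:

bootstrapMeanByIndex = function(data, idvar, valvar, bootTimes=200){
  data = data[!is.na(data[[idvar]]),]
  indPerID = tapply(1:nrow(data), data[[idvar]], function(x){x})
  lengthPerID = sapply(indPerID, length)
  bootMeans = rep(NA, bootTimes)   # preallocate the vector of results
  for (bi in 1:bootTimes){
    bootstrappedIDIndices = sample(x=1:length(indPerID), size=length(indPerID), replace=TRUE)
    newDataLength = sum(lengthPerID[bootstrappedIDIndices], na.rm=TRUE)
    allBootstrappedIndices = rep(NA, newDataLength)
    endInd = 0
    for (i in 1:length(bootstrappedIDIndices)){
      startInd = endInd + 1
      endInd = startInd + lengthPerID[bootstrappedIDIndices[i]] - 1
      allBootstrappedIndices[startInd:endInd] = indPerID[[bootstrappedIDIndices[i]]]
    }
    ## the calculation of interest happens here, instead of keeping the whole data frame
    bootMeans[bi] = mean(data[allBootstrappedIndices, valvar], na.rm=TRUE)
  }
  bootMeans
}

## Hypothetical usage:
## data = data.frame(id=c("AAA","AAA","BBB","BBB","BBB","CCC"), val=c(112,113,56,57,58,99))
## est = bootstrapMeanByIndex(data, "id", "val", bootTimes=200)
## quantile(est, c(0.025, 0.975))   # a rough percentile bootstrap interval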