CHAPTER 16. Known-fate models


In previous chapters, we've spent a considerable amount of time modeling situations where the probability of encountering an individual is less than 1. However, there is at least one situation where we do not have to model detection probability: known-fate data, so-called because we know the fate of each marked animal with certainty. In other words, encounter probability is 1.0 (which must be true if we know the fate of a marked individual with certainty). This situation typically arises when individuals are radio-marked, although certain kinds of plant data can also be analyzed with the known-fate data type. Known-fate data are important because they provide a theory for estimation of survival probability and other parameters (such as emigration).

The focus of known-fate models is the estimation of survival probability S, the probability of surviving an interval between sampling occasions. These are models where it can be assumed that the sampling probabilities are 1; that is, the status (dead or alive) of all tagged animals is known at each sampling occasion. For this reason, precision is typically quite high (as precise as the binomial distribution allows), even in cases where sample size is fairly small. The only disadvantages might be the cost of radios and possible effects of the radio on the animal or its behavior. The model is a product of simple binomial likelihoods. Data on egg mortality in nests and studies of sessile organisms, such as mollusks, have also been modeled as known-fate data. In fact, the known-fate data type is exactly the same as logistic regression in any statistical package.
The main advantage of using MARK for known-fate analysis is the convenience of model selection, and the capability to easily model average survival estimates and compute random effects estimates.

16.1. The Kaplan-Meier Method

The traditional starting point for the analysis of known-fate data is the Kaplan-Meier (1958) estimator; we'll discuss it briefly here, before introducing a more flexible approach that will serve as the basis for the rest of this chapter. The Kaplan-Meier (hereafter, K-M) estimator is based on observed data at a series of occasions, where animals are marked and released only at occasion 1. The K-M estimator of the survival function is

$$\hat{S}(t) = \prod_{i=1}^{t} \left( \frac{n_i - d_i}{n_i} \right)$$

where $n_i$ is the number of animals alive and at risk of death at occasion $i$ (given that their fate is known at the end of the interval), $d_i$ is the number known dead at occasion $i$, and the product is over occasions $i$ up to the $t$th occasion (this estimator is often referred to as the product-limit estimator).
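The product-limit computation can be sketched in a few lines of Python; this is a minimal illustration (the function name and counts are hypothetical), not MARK's implementation:

```python
# A minimal sketch of the Kaplan-Meier product-limit estimator.
# n[i] = number at risk during interval i (fates known at interval's end),
# d[i] = number known dead during interval i.

def kaplan_meier(n, d):
    """Return cumulative survival S(t) for t = 1..len(n)."""
    s, out = 1.0, []
    for n_i, d_i in zip(n, d):
        s *= (n_i - d_i) / n_i   # interval survival (n_i - d_i)/n_i
        out.append(s)
    return out

# Hypothetical data: 20 at risk with 2 deaths, then 18 at risk with 3 deaths
print(kaplan_meier([20, 18], [2, 3]))  # [0.9, 0.75]
```

Note that each element of the returned list is cumulative survival to the end of that interval, not the per-interval survival.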

Critical here is that $n_i$ is the number known alive at the start of occasion $i$ and whose fate (either alive or dead) is known at the end of the interval. Thus, the term in parentheses is just the estimated survival for interval $i$. Note that $n_i$ does not include individuals censored during the interval. It is rare that a survival study will observe the occasion of death of every individual in the study; animals are lost (i.e., censored) due to radio failure or other reasons. The treatment of such censored animals is important, but often somewhat subjective.

These K-M estimates produce a survival function (see White and Garrott 1990): the cumulative survival up to time $t$. This is a step function, and is useful in comparing, for example, the survival functions for males vs. females. If no animals are censored, then the survival function (the empirical survival function, or ESF) is merely

$$\hat{S}(t) = \frac{\text{number alive longer than } t}{n} \quad \text{for } t \geq 0$$

This is the same as the intuitive estimator where no censoring is occurring: $\hat{S}_t = n_{t+1}/n_t$; for example, $\hat{S}_2 = n_3/n_2$. The K-M method is an estimate of this survival function in the presence of censoring. Expressions for the variance of these estimates can be found in White and Garrott (1990).

A simple example of this method can be illustrated using the data from Conroy et al. (1989) on 48 radio-tagged black ducks. The data are:

week   alive at start   number dying   alive at end   number censored
  1          48               1              47               0
  2          47               2              45               0
  3          45               2              39               4
  4          39               5              34               0
  5          34               4              28               2
  6          28               3              25               0
  7          25               1              24               0
  8          24               0              24               0

Here, the number alive at the start of an interval are known to be alive at the start of sampling occasion $j$; this is equivalent to being alive at the start of interval $j$. For example, 47 animals are known to be alive at the beginning of occasion 2.
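A short Python sketch, using the at-risk and survivor counts from the duck example (at-risk counts exclude the censored animals), recovers the interval survival estimates discussed below:

```python
# Interval survival for the black duck data: alive-at-end / at-risk,
# where at-risk excludes censored animals (hence 41 in week 3, 32 in week 5).
at_risk      = [48, 47, 41, 39, 32, 28, 25, 24]
alive_at_end = [47, 45, 39, 34, 28, 25, 24, 24]

s_hat = [a / r for a, r in zip(alive_at_end, at_risk)]
print([round(s, 3) for s in s_hat])
# the first interval estimate is 47/48, approximately 0.979
```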
Forty-five are alive at the start of interval 3, but 4 are censored from these 45 because their fate is unknown at the end of the interval, so that $n_3 = 41$. A further example: 34 ducks survived to the start of occasion 5. Thus, the MLEs are

$\hat{S}_1 = 47/48 = 0.979$
$\hat{S}_2 = 45/47 = 0.957$
$\hat{S}_3 = 39/41 = 0.951$  (note: only 41 because 4 were censored)
$\hat{S}_4 = 34/39 = 0.872$
$\hat{S}_5 = 28/32 = 0.875$  (note: only 32 because 2 were censored)
$\hat{S}_6 = 25/28 = 0.893$
$\hat{S}_7 = 24/25 = 0.960$
$\hat{S}_8 = 24/24 = 1.000$

Here one estimates 8 parameters; call this model S(t). One could seek a more parsimonious model in several ways. First, perhaps all the parameters are nearly constant; thus a model with a single survival probability might suffice (i.e., S(.)). If something was known about the intervals (similar to the flood years for the European dipper data), one could model these with one parameter and denote the

other periods with a second survival parameter. Finally, one might consider fitting some smooth function across the occasions and thus have perhaps only one intercept and one slope parameter (instead of 8 parameters). Still other possibilities exist for both parsimonious modeling and probable heterogeneity of survival probability across animals. These extensions are not possible with the K-M method, and K-L-based (i.e., AIC) model selection is not possible. To do this, we need an approach based on maximum likelihood estimation; as it turns out, the simple binomial model will do just that for known-fate data.

16.2. The Binomial Model

In the K-M approach, we estimated the survival probability by

$$\hat{S}(t) = \prod_{i=1}^{t} \left( \frac{n_i - d_i}{n_i} \right)$$

where $d_i$ is the number dying over the $i$th interval, and $n_i$ is the number alive ("at risk") at the start of the interval and whose fate is also known at the end of the interval (i.e., not censored). Here, we use the equivalence (under some conditions) of the K-M estimator and a binomial estimator to recast the problem in a familiar likelihood framework.

Consider the case in which all animals are released at some initial time $t_0$, and there is no censoring. If we expand the product term from the preceding equation over the interval $[0, t]$,

$$\hat{S}(t) = \left( \frac{n_0 - d_0}{n_0} \right) \left( \frac{n_1 - d_1}{n_1} \right) \cdots \left( \frac{n_t - d_t}{n_t} \right)$$

We notice that in the absence of censoring (which we assume for the moment), the number of animals at risk at the start of an interval is always the previous number at risk, minus the number that died the previous interval. Thus, we can re-write the expanded expression as

$$\hat{S}(t) = \left( \frac{n_0 - d_0}{n_0} \right) \left( \frac{n_0 - (d_0 + d_1)}{n_0 - d_0} \right) \left( \frac{n_0 - (d_0 + d_1 + d_2)}{n_0 - (d_0 + d_1)} \right) \cdots \left( \frac{n_0 - (d_0 + d_1 + \cdots + d_t)}{n_0 - (d_0 + d_1 + \cdots + d_{t-1})} \right)$$

OK, while this looks impressive, its importance lies in the fact that it can be easily simplified: the product telescopes, with each numerator canceling the following denominator, leaving

$$\hat{S}(t) = \frac{n_0 - (d_0 + d_1 + \cdots + d_t)}{n_0}$$
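A quick numerical check of the telescoping, using hypothetical counts with no censoring:

```python
# With no censoring, the at-risk count each interval is the previous count
# minus the previous interval's deaths, so the K-M product collapses to
# (survivors at the end) / (number released).
d = [3, 1, 4, 2]           # hypothetical deaths per interval
n = [50]                   # hypothetical number released at time 0
for d_i in d:
    n.append(n[-1] - d_i)  # survivors become the next at-risk count

prod = 1.0
for n_i, d_i in zip(n, d):
    prod *= (n_i - d_i) / n_i

print(prod, n[-1] / n[0])  # both are 40/50 = 0.8 (up to rounding)
```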

If you look at this expression closely, you'll see that the numerator is the number of individuals from the initial release cohort ($n_0$) that remain alive (i.e., which do not die; the number that die is given by $d_0 + d_1 + \cdots + d_t$), divided by the number initially released. In other words, the estimate of survival to time $t$ is simply the number surviving to time $t$, divided by the number released at time 0. Now, this should sound familiar; hopefully you recognize it as the usual binomial estimator for survival, with (in this case) the number of survivors ("successes", in a literal sense) in $n_0$ trials. Thus, if

$$\hat{S}_i = \frac{y_i}{n_i}$$

where $y_i$ is the number surviving to time $i$ (i.e., surviving the interval $[i-1, i]$), and $n_i$ is the number alive ("at risk") at the start of the interval, then we can write

$$\hat{S}(t) = \prod_{i=1}^{t} \left( \frac{n_i - d_i}{n_i} \right) = \prod_{i=1}^{t} \frac{y_i}{n_i}$$

If you recall the brief introduction to likelihood theory in Chapter 1 (especially the section discussing the binomial), it will be clear that the likelihood expression for this estimator is

$$\mathcal{L}(\theta \mid n_i, y_i) = \prod_{i=1}^{t} \binom{n_i}{y_i} S_i^{y_i} \left(1 - S_i\right)^{(n_i - y_i)}$$

where $\theta$ is the survival model for the $t$ intervals, $n_i$ is the number of individuals alive (at risk) during each interval, $y_i$ is the number surviving each interval, and $S_i$ is the survival probability during each interval. As suggested at the start of this section, the binomial model allows standard likelihood-based estimation, and is therefore similar to other models in program MARK.

To understand analysis of known-fate data using the binomial model in MARK, we first must understand that there are 3 possible scenarios under the known-fate model. In a known-fate design, each tagged animal either:

1. survives to the end of the study (it is detected at each sampling occasion, so its fate is known on every occasion)
2. dies sometime during the study (its carcass is found on the first occasion after its death, so its fate is known)
3. survives up to the point where its fate is last known, at which time it is censored

Note that, for purposes of estimating survival probabilities, there is no difference between animals seen alive and then removed from the population at occasion $k$ and those censored due to radio failure or other reasons. The binomial model assumes that the capture histories are mutually exclusive, that animals are independent, and that all animals modeled with the same survival parameter have the same underlying survival probability (homogeneity across individuals). Known-fate data can be modeled by a product of binomials.

Let us reconsider the black duck data (seen previously) using the binomial model framework: $n_1 = 48$, and $n_2 = 44$. Thus, the likelihood for the first interval is

$$\mathcal{L}(S_1 \mid n_1, n_2) = \binom{n_1}{n_2} S_1^{n_2} \left(1 - S_1\right)^{(n_1 - n_2)} = \binom{48}{44} S_1^{44} \left(1 - S_1\right)^{(48 - 44)}$$
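This likelihood can be checked numerically; a minimal sketch (the grid search is purely for illustration, since the MLE has the closed form $n_2/n_1$):

```python
from math import comb, log

# First-interval duck data: n1 = 48 released, n2 = 44 alive at the
# next occasion. The binomial log-likelihood is maximized at n2/n1.
n1, n2 = 48, 44

def loglik(s):
    return log(comb(n1, n2)) + n2 * log(s) + (n1 - n2) * log(1 - s)

grid = [i / 1000 for i in range(1, 1000)]
s_mle = max(grid, key=loglik)                # grid point nearest 44/48
se = (s_mle * (1 - s_mle) / n1) ** 0.5       # binomial SE (see Chapter 1)
print(s_mle)                                 # close to 44/48, about 0.917
```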

Clearly, one could find the MLE, $\hat{S}_1$, for this expression (e.g., $\hat{S}_1 = 44/48 = 0.917$). We could also easily derive an estimate of the variance (see the corresponding section in Chapter 1). Of course, the other binomial terms are multiplicative, assuming independence. The survival during the second interval is based on $n_2 = 44$ and $n_3 = 41$:

$$\mathcal{L}(S_2 \mid n_2, n_3) = \binom{n_2}{n_3} S_2^{n_3} \left(1 - S_2\right)^{(n_2 - n_3)} = \binom{44}{41} S_2^{41} \left(1 - S_2\right)^{(44 - 41)}$$

As noted above, the likelihood function for the entire set of black duck data (modified to better make some technical points below) is the product of these individual likelihoods.

16.3. Encounter Histories

Proper formatting of encounter histories for known-fate data is critical, and is structurally analogous to the LDLD format used in some other analyses (e.g., Burnham's joint live encounter-dead recovery model); these are discussed more fully in Chapter 2. For the encounter histories for known-fate data, each entry is paired, where the first position (L) is a 1 if the animal is known to be alive at the start of occasion $j$; that is, at the start of the interval. A 0 in this first position indicates the animal was not yet tagged, or otherwise not known to be alive at the start of interval $j$, or else its fate is not known at the end of the interval (and thus the animal is censored and is not part of the estimation during that interval). The second position (D) in the pair is 0 if the animal survived to the end of the interval; it is a 1 if it died sometime during the interval. As the fate of every animal is assumed known at every occasion, the sampling probabilities ($p$) and reporting probabilities ($r$) are 1.
The following examples will help clarify the coding (histories shown for a study with 4 intervals):

history    probability       interpretation
10101010   S1 S2 S3 S4       tagged at occasion 1 and survived until the end of the study
10101100   S1 S2 (1-S3)      tagged at occasion 1, known alive during the second interval, and died during the third interval
10110000   S1 (1-S2)         tagged at occasion 1 and died during the second interval
11000000   (1-S1)            tagged at occasion 1 and died during the first interval
10000010   S1 S4             tagged at occasion 1, censored for intervals 2 and 3 (not detected, or removed for some reason), and reinserted into the study at occasion 4
00001011   S3 (1-S4)         tagged at occasion 3, survived the third interval, and died during the 4th interval
10000000   S1                tagged at occasion 1, known alive at the end of the first interval, but not released at occasion 2, and therefore censored after the first interval

Estimation of survival probabilities is based on a release (1) at the start of an interval and survival to the end of the interval (0); mortality probabilities are based on a release (1) and death (1) during the interval. If the animal was then censored, it provides no information about $S_i$ or $(1 - S_i)$.
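The pairing just described can be expressed as a small sketch that translates an LD history into its probability terms (the function name is hypothetical; this is not part of MARK):

```python
# '10' -> survived interval i (term S_i); '11' -> died during interval i
# (term 1 - S_i); '00' -> censored for interval i (contributes no term).
def history_terms(history):
    terms = []
    for i in range(0, len(history), 2):
        pair = history[i:i + 2]
        if pair == '10':
            terms.append(f"S{i // 2 + 1}")
        elif pair == '11':
            terms.append(f"(1-S{i // 2 + 1})")
    return terms

print(history_terms('10101100'))  # ['S1', 'S2', '(1-S3)']
```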

Some rules for encounter history coding for known-fate analysis:

a. The two-digit pairs each pertain to an interval (the period of time between occasions).
b. There are only 3 possible entries for each interval:
   10 = the animal survived the interval, given it was alive at the start of the interval
   11 = the animal died during the interval, given it was alive at the start of the interval
   00 = the animal was censored for this interval
c. In order to know the fate of an animal during an interval, one must have encountered it both at the beginning and the end of the interval.

16.4. Worked example: black duck survival

Here, we consider the black duck radio-tracking data from Conroy et al. (1989). These data are contained in the BLCKDUCK.INP file in the MARK examples subdirectory that is created when you install MARK on your computer. The data consist of 50 individual encounter histories, 8 encounter occasions, 1 group, and 4 individual covariates: age (0 = subadult, 1 = adult), weight (kg), wing (length, in cm), and condition. In this study, it was suspected that variation in body size, condition (or both) might significantly influence survival, and that the relationship between survival and these covariates might differ between adult and subadult ducks. Here is what a portion of the BLCKDUCK.INP file looks like, showing the encounter histories and covariate values for the first 10 individuals.

For example, the 10th individual in the data set has the encounter history 1010110000000000 (the pairs 10, 10, 11, followed by 00 pairs), meaning: marked and released alive at the start of the first interval, detected alive at the start of the second interval, and then died during the third interval. This individual was radio-marked as an adult, weighed 1.42 kilograms, had a wing length of 27.0 cm, and a recorded condition index. Ah, but look carefully: notice that in this .inp file, age is not coded as a classification variable (as is

typically done for groupings of individuals; see Chapter 2), but instead as a dichotomous covariate. Coding dichotomous groups as simple linear covariates is perfectly acceptable, and sometimes it is more straightforward to implement; the only cost (for large data sets) might be efficiency (the numerical estimation can sometimes be slower using this approach). However, the advantage of coding age as an individual covariate is that if age turns out to be unimportant, then you are not required to manipulate PIMs for 2 age groups.

Obviously, this has some implications for how we specify this data set in MARK. Start a new project in MARK. Select 'known fates' as the data type. Enter 8 encounter occasions. Now, the trick is to remember that even though there are two age groups in the data, we're coding age using an individual covariate; as such, there is still only 1 attribute group, not two. So, leave attribute groups at the default of 1. For individual covariates, we need to tell MARK that the input file has 4 covariates, which we'll label as age, weight, wing, and cond (for condition), respectively.

Once we've specified the data type, we'll proceed to fit a series of models to the data. Let's consider models S(t), S(age), S(.), S(weight), S(wing), and S(cond). Clearly, the most parameterized of these models is model S(t), so we'll start with that. Here is the PIM for survival:

16.5. Pollock's staggered entry design

The usual application of the Kaplan-Meier method assumes that all animals are released at occasion 1 and are followed during the study until they die or are censored. Often, however, new animals are released at each occasion (say, weekly); we say this entry is staggered (Pollock et al. 1989). Assume, as before, that animals are fitted with radios and that these do not affect the animal's survival probability. This staggered entry fits easily into the K-M framework by merely redefining $n_i$ to include the number of new animals released at occasion $i$. Therefore, conceptually, the addition of new animals into the marked population causes no difficulties in data analysis.

But, you might be wondering how to handle staggered entry designs in MARK; after all, how do you handle more than one cohort, if the survival PIM has only one row? If you think that the survival of the newly added animals is identical to the survival of animals previously in the sample, then you can just include the new animals in the encounter histories file, with pairs of 00 LD codes prior to when the animal was captured and first put into the sample. But what if you think that the newly added animals have different survival? Obviously, you need more rows. How?

As it turns out, there is a straightforward and fairly intuitive way to tweak the known-fate data type (in this case, allowing it to handle staggered entry designs): you simply add in additional groups for each release occasion (each release cohort), thus allowing cohort-specific survival probabilities. For this to work, you need to fix the survival probabilities for these later cohorts prior to their release to 1, because there are no data available to estimate these survival rates.
With multiple groups representing different cohorts, analyses equivalent to the upper-triangular PIMs of the CJS and dead recovery data types can be developed.

Staggered entry worked example

To demonstrate the idea, we'll consider a somewhat complex example, involving individuals radio-marked as young; the complexity lies in how you handle the age structure. Within a cohort, age and time are collinear, but with multiple release cohorts, it is possible to separate age and time effects (see Chapter 7). We simulated a data set (staggered.inp) where individuals were radio-marked as young and followed for 5 sampling intervals; assume each interval is (say) a month long. We assumed that all individuals alive and in the sample were detected, and that all fatalities (fates) were recorded (detected). We let survival in the interval following marking be 0.4, while survival in subsequent intervals (within a given cohort) was 0.8. For convenience, we'll refer to the two age classes as 'newborn' and 'mature', respectively.

If this were a typical age analysis (see Chapter 7), this would correspond to the following PIM structure: But here we are considering the known-fate data type, with staggered entry. To begin, let's first have a look at the .inp file; the first few lines are shown below:

Now, at first glance, the structure of this file might seem perfectly reasonable. There are 5 occasions, in LDLD format. We see, for example, that there were 865 individuals marked and released in the first cohort which died during the first interval (as newborns), 126 which were marked and released in the first cohort which died during the second interval (as mature individuals), and so on. But what about the second cohort, and the third cohort, and so on? How do we handle these additional cohorts?

As mentioned, we accomplish this in the known-fate data type in MARK by specifying multiple groups, one group for each release cohort (in this case, 5 groups). However, while it is easy enough to specify 5 groups in the data specification window in MARK, we first need to modify the .inp file to indicate multiple groups. Recall from earlier chapters (especially Chapter 2) that each grouping requires a frequency column. So, 5 groups means 5 frequency columns, not just the single frequency column we start with. The fully modified staggered.inp file is shown at the top of the next page (we'll let you make the modifications yourself). Notice that there are now 5 frequency columns: the first frequency column corresponds to the number of individuals marked and released in cohort 1, the second frequency column corresponds to the number of individuals marked and released in cohort 2, and so on. Pay particular attention to the structure of these frequency columns.
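The frequency-column layout just described can be sketched in Python; this is an illustration of the idea (the function name is hypothetical, and the second example count is invented), not a substitute for editing the .inp file by hand:

```python
# With 5 cohort groups, each history's count goes in the frequency column
# for its release cohort (the position of the first non-'00' LD pair),
# with zeros in the other columns.
def freq_columns(history, count, n_groups=5):
    pairs = [history[i:i + 2] for i in range(0, len(history), 2)]
    cohort = next(i for i, p in enumerate(pairs) if p != '00')
    cols = [0] * n_groups
    cols[cohort] = count
    return cols

print(freq_columns('1100000000', 865))  # [865, 0, 0, 0, 0]  -> cohort 1
print(freq_columns('0011000000', 1))    # [0, 1, 0, 0, 0]    -> cohort 2
```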

Now that we've modified the .inp file (above), we can go ahead and run the analysis in MARK. We select the known-fate data type, and specify 5 groups (which we'll label as c1, c2, ..., c5, for cohort 1, cohort 2, and so on, respectively, to cohort 5):

OK, so far, so good. Now for the only real complication: how to structure the PIMs for each cohort, and which parameters to fix to 1, in order for the analysis to make sense. Let's consider the following 2 models for our model set: S(a2 - cohort), and S(a2). The first model indicates 2 age classes (newborn, and mature), with differences among cohorts. This corresponds to the following PIM structure: The second model has differences in survival among the two age classes, but no differences among cohorts. This corresponds to the following PIM structure: OK, so these are the 2 models we want to fit in MARK. The challenge is figuring out how to build them, and which parameters to fix. Clearly, the first model, S(a2 - cohort), is the most general (since it has the most parameters), so we'll start there. Here is the default PIM chart for these data: We see from the PIM structure for this model that the first cohort consists of 2 age classes, as does the second, third, and so on. So, we might choose to simply right-click on the various blue boxes in the PIM chart, and select 'age', specifying 2 age classes. Now, while you could, with some care, get this to work, there is an alternative approach which, while appearing to be more complex (and initially perhaps somewhat less intuitive), is in practice much easier.

The key is in remembering that in the known-fate staggered entry analysis, we treat each cohort as if it were a separate group, fixing any '00' cells preceding the initial encounter in a cohort to 1.0. Again, keep in mind that each row (cohort) represents a separate group. And, as noted, we want to fix the estimate for any of the preceding '00' cells to 1.0. Where do these cells occur? We've added them to the PIM in the following: Now for the big step: if all of the '00' cells are ultimately to be fixed to 1.0, then we clearly need only one parameter to code for them. So, let's rewrite the PIM, using parameter 1 for the '00' cells, and then increasing the value of all of the other parameters by 1: OK, now what? Well, each cohort is a group. So, we open up the PIM chart for each of the 5 groups (cohorts) in our example analysis; each of them has the same structure: a single line. Here is the starting PIM for cohort 1: So, remembering that we want the overall PIM structure (over all cohorts) to match, it should be clear how to modify the PIM for cohort 1: it needs to be modified to correspond to the first row of the overall PIM structure.

In other words, for cohort 1, and for cohort 2, and so on, each PIM is modified to match the corresponding row (representing a specific cohort) in the overall PIM. Before we run our analysis, it's worth looking at the PIM chart for the model we've just created: Note that the new parameter 1 occurs only in groups (cohorts) 2 to 5. The staircase pattern for parameters 2 to 6, and 7 to 10, shows that we're allowing survival to vary among release cohorts as a

function of age: in the first interval following marking (newborns, parameters 2 to 6), and in subsequent intervals (mature, parameters 7 to 10). Note that in cohort 5, there are no mature individuals. Now, all that is left to do is to run the model, and add the results to the browser. All you need to do is remember that parameter 1 is fixed to 1.0. Go ahead and run the model, after first fixing the appropriate parameter to 1.0; add the results to the browser, and call the model 'S(a2 - cohort) - PIM' (we add the word PIM to indicate the model was built by modifying the PIMs).

OK, what about the second model, S(a2) (no cohort effect)? Well, if you've reached this point in the book (i.e., have worked through the preceding chapters), you might realize that this model corresponds to: Again, if we add a parameter 1 to indicate the '00' cells preceding the first encounter within each cohort, and subsequently increment the parameter indexing for all other parameters by 1, we get: We can build this model conveniently by simply modifying the PIM chart for the preceding model, S(a2 - cohort). Recall that the PIM chart for that model was (see the top of the next page):

So, to build model S(a2), all we need to do is remove the cohort variation for parameters 2 to 6, and 7 to 10; this is shown in the modified PIM chart, below:

Now, run this model, first fixing parameter 1 to 1.0, label it 'S(a2) - PIM', and add the results to the browser: As expected, model S(a2) (the true, underlying model which we used to generate the data) gets virtually all of the AIC weight, relative to the other model. And, the reconstituted parameter estimates are very close to the true underlying values.

Now, while fiddling with the PIM chart (and the underlying PIMs) is convenient for these simple models, we know from earlier chapters that there are structural limits to the types of models we can construct this way. Most obviously, we can't use the PIM approach to build models with additive effects. Ultimately, it's to our advantage to build models using the design matrix (DM), since all reduced-parameter models can be constructed simply by manipulating the structure of the DM for the most general model.

Let's build the DM for model S(a2 - cohort), which is the more general of the two models in our candidate model set. First, we start by writing out the conceptual structure of the linear model corresponding to this model:

S = cohort + age + cohort.age

The first term is fairly straightforward: we have 5 cohorts, so we need (5 - 1) = 4 columns to code for cohort. What about age? Well, look again at the PIM for this model, remembering that parameter 1 is fixed at 1.0, and is thus a constant. We can ignore it for the moment (although we do need to account for it in the DM). Pay close attention to the parameters along and above the diagonal. These represent each of the two age classes in our model: they vary among rows within an age class, but are constant among columns within a row, specifying cohort variation for a given age class, but no time variation (recall from Chapter 7 that a fully age-, time- and cohort-dependent model is generally not identifiable, since the terms are collinear). So, we have 2 age classes, meaning we need (2 - 1) = 1 column to code for age. What about cohort?
Well, 5 cohorts, so (5 - 1) = 4 columns to code for cohort. Again, hopefully familiar territory; if not, go back and re-read Chapter 6. But what about the interaction terms (age.cohort)? Do we need (4 x 1) = 4 columns? If you think back to some of the models we constructed in Chapter 7 (age and cohort models), especially those models involving individuals marked as young only, you might see how we have to handle interaction terms for this model. Recall from Chapter 7 that the interaction columns in the DM reflected plausible interactions: if a specific interaction of (say) age and time wasn't possible, then there was no column in the DM for that interaction. For example, for an analysis of individuals marked as young, there can be

no interaction of age (young or adult) with time in the first interval, since if the sample are all marked as young, then there are no marked adults in the first interval to form the interaction (i.e., there can be no plausible interaction of age and cohort in the first interval, since only one of the two age classes is present in the first interval).

OK, so what does this have to do with our known-fate data? The key word is 'plausible': we build interaction columns only for interactions that are plausible, given the structure of the analysis. In this case, there are only 2 true age classes (newborn, and mature). All of the other 'age classes' are logical: we've created them to handle the preceding '00' terms in the PIM. They are not true age classes, since there are no marked animals in those classes. As such, there are no interactions between cohort and any of these logical '00' age classes; we need only consider the interactions of the two true biological age classes (newborn, and mature) with cohort.

But, how many columns? Look closely again at the PIM: pay particular attention to the fact that the newborn age class shows up in all 5 cohorts, while the mature age class shows up only in the first 4 cohorts (and not in the fifth). So, not all (age x cohort) interactions are plausible. Which ones are plausible? Well, both age classes are represented in the first 4 cohorts, but both age classes are represented only over intervals 2 to 4. Thus, we need only include cohorts 2, 3 and 4 in the interaction terms. See the pattern? If not, try again; it's very similar to problems we considered in Chapter 7.

OK, penultimate step: what about parameter 1? Well, as noted earlier, since it's fixed to 1.0, it's simply a constant across cohorts, and thus enters into the linear model as a single parameter. Now, finally, we're ready to write out the linear model corresponding to S(a2 - cohort).
$$\hat{S} = \beta_1(\text{constant}) + \beta_2(\text{intercept}) + \beta_3(\text{age}) + \beta_4(c_1) + \beta_5(c_2) + \beta_6(c_3) + \beta_7(c_4) + \beta_8(\text{age}.c_2) + \beta_9(\text{age}.c_3) + \beta_{10}(\text{age}.c_4)$$

Is this correct? It has the same number of terms (10) as there are parameters in the PIM chart, so it would seem to be. The next step, then, is to actually build the DM. We start by having MARK present us with a 10-column reduced DM as the starting point. The completed DM for this model is shown at the top of the next page. Column 1 (labeled B1) contains a single 1; this represents parameter 1, which is a constant fixed to 1.0 for all cohorts. The next column (labeled B2) represents the intercept for the age and cohort part of the model. Column B3 codes for age: 1 for newborn individuals, and 0 for mature individuals (note the different number of rows for each age class; this is key: 5 rows for newborns, and 4 rows for mature individuals). Columns B4 to B7 code for cohort. Note how the row for newborn individuals in cohort 5 is coded, and note that this row does not show up for mature individuals, since in cohort 5 there are no mature individuals! Finally, the interaction terms: columns B8 to B10, for those age.cohort combinations that represent plausible interactions.
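The column coding just described can be sketched programmatically. This is only an illustration of the 9 rows for parameters 2 to 10 (the fixed parameter 1 and its B1 column are handled separately, as described above), and the row ordering here is one plausible convention, not necessarily MARK's:

```python
# Rows are the 9 real parameters: newborn in cohorts 1-5, then mature in
# cohorts 1-4 (cohort 5 has no mature animals). Columns: B2 (intercept),
# B3 (age), B4-B7 (cohort 1-4 dummies), B8-B10 (age x cohort 2-4 only,
# the plausible interactions).
rows = [("newborn", c) for c in range(1, 6)] + [("mature", c) for c in range(1, 5)]

dm = []
for age, c in rows:
    a = 1 if age == "newborn" else 0
    cohort = [1 if c == k else 0 for k in range(1, 5)]   # B4-B7
    inter = [a * x for x in cohort[1:]]                  # B8-B10
    dm.append([1, a] + cohort + inter)

for row in dm:
    print(row)
```

Note how the newborn row for cohort 5 has all cohort and interaction columns equal to zero (cohort 5 is the reference level), and there is no matching mature row.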

Go ahead and run this DM-based model (label it 'S(a2 - cohort) - DM'), and confirm that the results exactly match those for the model you constructed using the PIM chart, as shown below: Now that you have the DM for the general model, try constructing model S(a2), the second model. We already did this a few pages back using the PIM approach, but we can generate the same model easily using the DM approach, by simply deleting (i) the columns of the DM coding for cohort, and (ii) the (age.cohort) interaction columns:
