NISP, Bone Fragmentation, and the Measurement of Taxonomic Abundance


Michael D. Cannon
SWCA Environmental Consultants
257 E. 200 S., Suite 200, Salt Lake City, UT
Phone: (801)   Fax: (801)
mcannon@swca.com

Running Head: NISP, Bone Fragmentation, and the Measurement of Taxonomic Abundance

Abstract

Zooarchaeologists have long recognized that NISP is dependent on the degree to which bones are fragmented, but attempts are rarely made to control for the effects of fragmentation on NISP. This paper provides insight into those effects by presenting both a formal model of the relationship between NISP and fragmentation and experimental data on that relationship. The experimental data have practical implications regarding the effectiveness of potential measures of bone fragmentation, suggesting that specimen size, which can be determined easily through digital image analysis, is more useful than other variables that have been or might be used as fragmentation measures.

Key Words: zooarchaeology, taphonomy, experimental archaeology, digital image analysis

Introduction

The number of identified specimens (NISP) is the simplest measure of taxonomic abundance available to zooarchaeologists, and it is probably also the most commonly used. It has long been recognized, however, that NISP is far from perfect as a taxonomic abundance measure (e.g., Grayson 1984; Klein and Cruz-Uribe 1984; Marshall and Pilgram 1993). Among the problems that have been noted with NISP is that it varies not only with taxonomic abundance but also with the degree to which bones have been fragmented: breaking bones into more pieces means more pieces that can potentially be identified, and hence potentially higher NISP values. Even though zooarchaeologists have long acknowledged this, we rarely attempt to control for the effects of fragmentation on NISP when using it to measure taxonomic abundance. Without doing so, we cannot truly know whether variability in NISP is simply telling us about variability in fragmentation rather than about variability in taxonomic abundance. As will be shown below, potentially significant conclusions about prehistory that are based upon patterns in archaeofaunal taxonomic abundance may be confounded by differential rates of fragmentation among faunal samples.

This paper takes steps towards developing methods for better dealing with this problem by presenting 1) a model of the relationship between NISP and fragmentation rate, 2) experimental data on the shape of that relationship, and 3) experimental data on the effectiveness of potential measures of fragmentation that might be used to control for the effects of this variable in NISP-based analyses of taxonomic abundance.

Is Fragmentation a Problem Worth Worrying About?

Before getting into the model and the experiment that are the focus of this paper, it is worth asking whether they are really necessary. That is, are there reasons to think that fragmentation might truly be a significant confounding variable in NISP-based analyses of taxonomic abundance, to the extent that important conclusions about the human past may be at risk? A brief example from the Mimbres Valley of southwestern New Mexico indicates that it very well might.

Previous research from this region has shown significant changes over time in the archaeofaunal abundance of artiodactyls relative to lagomorphs, as measured by the Artiodactyl Index, and it has been argued that this reflects changes in the proportions of these prey types that hunters captured, changes that were, in turn, a result of human population growth and attendant depression of artiodactyl resources (Broughton et al. 2010; Cannon 2001, 2003; Nelson and LeBlanc 1986; also see Broughton et al. for a recent overview of the use of relative abundance measures like the Artiodactyl Index in resource depression studies). This previous research is illustrated in Table 1 by samples that date from the Early Pithouse period through the Terminal Classic phase of the Mimbres culture historical sequence (i.e., from the early A.D. 400s through the early A.D. 1100s). The Artiodactyl Index declines steeply during the initial periods of occupation of the Mimbres Valley by agriculturalists (i.e., from the Early Pithouse period to the Three Circle phase) and then remains steadily low during the periods when the human population of the valley was evidently larger (i.e., from the Three Circle phase through the Terminal Classic). It has also been suggested that artiodactyl populations subsequently rebounded after the Terminal Classic, following a substantial decline in the size of the human population in the Mimbres Valley (Nelson and LeBlanc 1986). Data from currently-underway analyses of later assemblages from the valley can be used to address this issue and are illustrated in Table 1 by a sample from the Stailey site (Nelson and LeBlanc 1986), which dates to the Cliff phase (A.D. ). Consistent with the suggestion of an artiodactyl population rebound, the Artiodactyl
Index rises in this Cliff phase sample to a level nearly as high as that seen in the earliest Mimbres Valley sample.

It is premature, however, to conclude that this pattern in the NISP-based Artiodactyl Index truly reflects a Cliff phase increase in artiodactyl taxonomic abundance. Rather, it may simply be a result of differential fragmentation of artiodactyl and lagomorph bones, as is illustrated in Table 2. This table shows average proportions of bone density photodensitometer scan sites (Lyman 1984; Pavao and Stahl 1999), a variable that I have used in previous Mimbres zooarchaeological research as a measure of bone fragmentation (Cannon 2001). In that previous research, the bone density scan sites present on faunal specimens were recorded for purposes of evaluating the strength of density-mediated attrition (e.g., Lyman 1984, 1985), but it also became apparent that those data could be used as a measure of bone fragmentation, because the proportion of the total number of possible scan sites that are actually present on a specimen is related to the degree to which bones are broken. For example, a complete femur would possess 6 of 6 possible scan sites, for a scan site proportion of 1.0, but if that bone is broken in half, each piece might possess only 3 scan sites, for an average scan site proportion across the two specimens of 0.5. For the artiodactyls in the samples included in Table 1, the mean proportion of scan sites per specimen is much lower in the Cliff phase sample (0.14) than in the earlier samples (0.25 to 0.45), indicating that the bones in the later sample are more heavily fragmented than those in the earlier samples (Table 2). Conversely, for the lagomorphs, mean scan site proportion is much higher in the Cliff phase sample (0.62) than in the earlier samples (0.17 to 0.42), indicating lesser fragmentation in the latest sample.
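The femur arithmetic above can be sketched in a few lines of code. The femur's six scan sites follow the example in the text; the scan-site totals for the other elements and the specimen records are hypothetical illustrations, not data from the Mimbres assemblages.

```python
# Sketch of the scan-site fragmentation measure described in the text.
# The femur's 6 scan sites follow the example above; the other element
# totals and all specimen records are hypothetical, not Mimbres data.

# Total possible photodensitometer scan sites for a complete element
SCAN_SITES_PER_ELEMENT = {"femur": 6, "humerus": 5, "tibia": 5}

def mean_scan_site_proportion(specimens):
    """Mean, across specimens, of (scan sites present / sites possible).
    specimens: list of (element_name, scan_sites_present) tuples."""
    proportions = [present / SCAN_SITES_PER_ELEMENT[element]
                   for element, present in specimens]
    return sum(proportions) / len(proportions)

# A complete femur carries all 6 of 6 sites (proportion 1.0); broken in
# half, each piece might carry 3 sites, for a mean proportion of 0.5.
whole = [("femur", 6)]
halved = [("femur", 3), ("femur", 3)]
print(mean_scan_site_proportion(whole))   # 1.0
print(mean_scan_site_proportion(halved))  # 0.5
```

Lower mean proportions thus indicate heavier fragmentation, which is how the values in Table 2 are read.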
Thus, it is conceivable that the high Artiodactyl Index value observed in the Cliff phase sample is (at least in part) a result of a highly inflated artiodactyl
NISP value caused by relatively high fragmentation, in combination with a lagomorph NISP value that is less inflated than most due to relatively low fragmentation. In other words, the high Cliff phase Artiodactyl Index value may have as much or more to do with differential fragmentation than with any actual changes in the proportions of various prey that prehistoric Mimbres hunters captured. The causes of the differences in fragmentation may certainly be interesting in their own right, but those causes are beside the main point to be made here, which is that fragmentation, whatever its cause, might confound NISP-based analyses of taxonomic abundance that are no less interesting.

The remainder of this paper is intended to be a step towards better understanding the relationship between bone fragmentation and NISP, and towards better measuring fragmentation in archaeofaunal assemblages, so that we can begin to have greater confidence that we are measuring what we actually want to measure when using NISP to explore variability in taxonomic abundance. This is done by first presenting a model of the relationship between NISP and fragmentation that is more explicit than any developed to date, and by then presenting experimental data that can be used both to evaluate the model and to evaluate potential measures of bone fragmentation.

What Does NISP Measure?

In this section I present an algebraic model designed to capture the relationship between NISP and fragmentation rate, given some number of animal carcasses deposited in an archaeological context. This model builds on the pioneering consideration of the issue presented by Marshall and Pilgram (1993), who note that the effects of fragmentation on NISP are likely to be somewhat complex (Figure 1). They point out that, for relatively low fragmentation rates (i.e., relatively few fragments per carcass or per skeletal element), NISP should increase as the
fragmentation rate increases, simply because more specimens (i.e., pieces of bone) are being created. As the rate of fragmentation reaches a certain point, however, they argue that NISP should begin to decline with further increases in fragmentation rate, because the proportion of specimens that are identifiable should decline with reductions in average specimen size (see also Lyman and O'Brien 1987; Watson 1972).

A relationship such as the one described by Marshall and Pilgram (1993) can be captured more formally by the equation

N_ij = I_ij F_ij S_ij, (eq. 1)

where N_ij is the number of identified specimens of taxon i recovered from the jth context (e.g., stratum, feature, room, site, etc.), I_ij is the number of individuals of taxon i originally deposited in context j, F_ij is the fragmentation rate, or the number of specimens created per individual, and S_ij is the survival rate, or the proportion of specimens that survive to be identified (with a value ranging from zero to one because it is a proportion). The variable I_ij, the number of individuals originally deposited, is usually what is of primary interest when NISP is used to measure taxonomic abundance, but, of course, NISP is also affected by the other variables of bone fragmentation and survivorship. The remainder of the discussion of this model, and the experimental results presented below, facilitate understanding exactly how those other variables affect NISP.

Framing this equation in terms of the number of deposited individuals of a taxon (I_ij) amounts to assuming that the carcasses of those individuals were deposited whole. This is certainly an unrealistic assumption in many cases: indeed, an extremely large amount of archaeological and ethnographic research has been inspired by the fact that humans often leave different parts of vertebrate carcasses in different places (see, e.g., Binford 1978; Lyman
1994; Rogers 2000; White 1952, 1953). A more realistic, but also more complicated, model analogous to the one presented here could be developed allowing for different probabilities of deposition for different skeletal elements or portions of elements, thereby relaxing the assumption that carcasses were deposited whole. This would enable, for example, exploration of such issues as differential fragmentation and survivorship among parts of the skeleton, issues that are certainly important in zooarchaeology (e.g., Lyman 1984, 1985). However, to keep things simple, such a model is not presented here, and I discuss the model that is presented here solely in terms of numbers of individuals, recognizing that this is a simplifying convenience.

Before exploring the model further, a few additional details should be noted. First, because I am assuming here that carcasses are deposited whole, I consider the minimum possible fragmentation rate to be equal to the number of identifiable whole bones in a complete skeleton of an individual of taxon i; below, I represent this constant by the symbol β_i (so F_ij must be greater than or equal to β_i). In addition, it is important to keep in mind that fragmentation rates and survival rates will surely vary among taxa, and even among samples of specimens of a single taxon recovered from different contexts (hence the subscripts i and j). Moreover, since the survival rate, as defined here, is the proportion of specimens that survive to be identified by a faunal analyst, and since different analysts will certainly vary in their ability to identify fragmentary specimens with confidence, it is likely that survival rates will vary not just among taxa and among depositional contexts, but among analysts as well.

This model can be developed further by recognizing that the survival rate is itself a function of the fragmentation rate. As many have noted (e.g., Lyman and O'Brien 1987; Marshall and Pilgram 1993; Watson 1972), the proportion of specimens that are identifiable to
taxon should decline as bones are broken into increasingly small pieces. In addition, as fragmentation rates increase, it is likely that increasing proportions of specimens will go unrecovered by archaeologists (e.g., because higher proportions of them will fall through the screens used in excavation), and it is possible that increasing proportions of specimens will succumb to chemical processes that can remove bone from the archaeological record prior to excavation (e.g., Cannon 1999; Lyman 1994; Stiner et al. 2001). Since survival, as the term is used here, requires that specimens be first recovered and then identified, all of these factors should cause survival rates to decline as fragmentation rates increase.

To explore the relationship between survival rate and fragmentation rate, the variable loss rate must first be introduced. This is simply the complement of the survival rate, or the proportion of specimens per individual that do not survive to be identified, and it is related to the survival rate by the equation

S_ij = 1 − L_ij, (eq. 2)

where L_ij is the loss rate. The loss rate must equal zero when fragmentation is minimal (i.e., when F_ij = β_i), and, given the above consideration of the relationship between survival rate and fragmentation rate, it should increase as the fragmentation rate increases. If the survival rate declines as a linear function of the fragmentation rate, then the function that describes the relationship between the loss rate and the fragmentation rate would be

L_ij = σ (F_ij − β_i), (eq. 3)

where σ is a constant that determines the slope of the relationship; this constant must have a value that is positive but very small (i.e., much less than one) for any specimens to remain identifiable at most levels of fragmentation. A loss function of this sort and its corresponding survival function are illustrated in Figure 2A. Linear relationships like those in Figure 2A are
the simplest possible relationships that can be assumed, but the experiment that I discuss below suggests that such relationships are unrealistic. Rather, in that experiment, the survival rate first declines steeply as the fragmentation rate increases and then levels off somewhat at higher fragmentation rates. Figure 2B presents a survival function of this sort along with its corresponding loss function. Such non-linear relationships can be modeled by using for the loss rate equation a power function of the form

L_ij = σ (F_ij − β_i)^ε, (eq. 4)

where ε has a value that ranges from zero to one (noting that equation 3 is simply a special case of equation 4 in which ε = 1). Given equations 2 and 4, equation 1 can be rewritten as

N_ij = I_ij F_ij {1 − [σ (F_ij − β_i)^ε]}. (eq. 5)

Figure 3 graphs equation 5 for different values of I_ij, ε, and σ; note that the NISP functions shown in this figure take the unimodal shape postulated by Marshall and Pilgram (1993:Figure 3). In fact, if the survival rate is a declining function of fragmentation rate, as it surely must be, then it is mathematically unavoidable that NISP functions will take such a shape.[1]

[1] A proof of this for the simple case where ε = 1 (in which case the math is relatively tractable) is that the first derivative of the NISP function will equal zero when F = β/2 + 1/(2σ), and that it will be positive at lower values of F and negative at higher values of F (i.e., the NISP function will be upside-down U-shaped, with a maximum where F = β/2 + 1/(2σ)).

Two points worth discussing emerge from this exercise. First, as can be seen in equation 5, NISP can be expressed in terms of just two variables (in addition to the constants β, ε, and σ): the number of individuals deposited and the fragmentation rate. This, of course, is simply another way of saying that NISP is a measure of both of these variables. It is also another way of
saying that, for the large number of archaeological research questions that require us to measure something along the lines of the number of individual animals deposited, fragmentation is a potential confounding variable whose effects must be controlled. It remains to be determined, however, precisely how much error fragmentation might introduce into analyses of taxonomic abundance that use NISP, which leads to the second point.

As can be seen in Figure 3, which presents simulated NISP functions based on the survival and loss functions shown in Figure 2, the degree to which NISP will be affected by fragmentation depends on the shape of the survival function. The values of I and β are identical in Figures 3A and 3B, and in both figures loss becomes complete at approximately the same fragmentation rate. The sole difference between the two figures is that the NISP functions in Figure 3A are based on a linear survival function (Figure 2A), whereas those in Figure 3B are based on a non-linear survival function in which survival initially declines steeply and then levels off somewhat (Figure 2B). Because of this difference in survival function, the maximum NISP values reached in Figure 3B are much lower than (in fact, approximately half the size of) the maximum NISP values reached in Figure 3A. Obviously, we would be much better off if, in the real world, NISP behaved more like it does in Figure 3B than like it does in Figure 3A: flatter NISP functions would mean that differences in fragmentation rates would lead to smaller differences in NISP values, and we could be more confident that variability in NISP values truly reflected variability in taxonomic abundance rather than simply variability in rates of fragmentation. I next present the results of an experiment designed to further explore the NISP model presented here.
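Equations 1 through 5 can be evaluated numerically to reproduce the unimodal NISP curves just described. This is a minimal sketch: the parameter values (I = 10 individuals, β = 50 whole bones per skeleton, σ = 0.002) are illustrative assumptions, not values estimated from any assemblage.

```python
# Numerical sketch of the NISP model in equations 1-5. The parameter
# values (I = 10 individuals, beta = 50 whole bones per skeleton,
# sigma = 0.002) are illustrative assumptions only.

def loss_rate(F, beta, sigma, epsilon):
    """Eq. 4: L = sigma * (F - beta)**epsilon, capped at 1 (total loss)."""
    return min(1.0, sigma * (F - beta) ** epsilon)

def nisp(F, I, beta, sigma, epsilon):
    """Eq. 5: N = I * F * (1 - L), with F the fragments per individual."""
    return I * F * (1.0 - loss_rate(F, beta, sigma, epsilon))

I, BETA, SIGMA = 10, 50, 0.002

# For the linear case (epsilon = 1), the footnote's peak location is
# F = beta/2 + 1/(2*sigma) = 25 + 250 = 275 fragments per individual.
peak_F = BETA / 2 + 1 / (2 * SIGMA)

curve = [(F, nisp(F, I, BETA, SIGMA, 1.0)) for F in (50, 150, 275, 400, 550)]
# NISP rises from the whole-bone value at F = 50, peaks at F = 275, and
# falls to zero by F = 550: the unimodal shape described for Figure 3A.
```

Varying epsilon below 1 in this sketch flattens the curve, which is the contrast drawn between Figures 3A and 3B.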
These experimental results can be used to a certain extent to draw inferences about the degree to which NISP values are likely to be affected by fragmentation rates in the real world. More important, the experiment provides an evaluation of methods that might be used to
determine empirically the degree to which NISP values have been affected by fragmentation in archaeological applications. Even if the effects of fragmentation on NISP are likely to be less severe than the worst-case scenario that could be imagined (e.g., more like the scenario depicted in Figure 3B than the one depicted in Figure 3A), NISP values will still vary to some degree with fragmentation, and the effects of fragmentation must be controlled before full confidence can be placed in conclusions about such things as past human subsistence that are derived from analyses that employ NISP.

What Do NISP Functions Look Like in the Real World?

In some sense, the usefulness of NISP, as just noted, depends to a certain degree on the answer to the question: what do survival functions and NISP functions look like in the real world? We can begin to answer this question by considering the results of an experiment designed to explore empirically how variability in fragmentation rate affects survival rate and NISP. One purpose of this experiment can be thought of as validating the model just presented: i.e., evaluating whether NISP actually behaves as the model and the original postulation of Marshall and Pilgram (1993) predict.

Methods

Carrying out such an experiment requires a way of systematically varying the degree to which bones are fragmented so that the response of NISP to increases in fragmentation can be observed. To do this, I constructed a device that can fragment the bones of small animals in a controlled manner (Figure 4). This device, affectionately named the Bone Crusher, consists of two plywood-reinforced 9-inch baking pans, between which bones are placed. Large C-clamps are used to apply force to the pans, thereby crushing the bones, and metal spacers placed between the
pans limit the degree to which the bones are fragmented. By progressively reducing the size of the spacers, the fragmentation rate can be progressively increased.

The experiment described here used the skeletons of two domestic rabbits (Oryctolagus cuniculus) and one domestic cat (Felis domesticus). The skeletons of two different taxa were used, and were mixed together, so that the analyst would be placed in the realistic position of having to make taxonomic identifications based on osteological characteristics: this would not be the case if all specimens were from a single taxon, because the taxon of each specimen would then be known a priori. However, because it was necessary to do this, fragmentation rates and survival rates cannot be calculated for each taxon individually in the experiment: unidentified specimens are included in the calculation of these variables, and the unidentified specimens from the mixed-taxon samples, by definition, cannot be attributed to one taxon or the other. The skeletons were purchased from commercial suppliers so that the bones would be as complete as possible and so that they would be uniform in color (bleached white), thereby precluding the possibility that specimens might be identified to taxon based on color rather than on osteology. Likewise, two taxa of roughly similar body size were used so that size-related characteristics (e.g., cortical bone thickness) would not provide clues for taxonomic identification. The complete skeletons were used, with the exception of these elements: vertebrae, ribs, sternebrae, clavicles, carpals, tarsals other than the astragalus and calcaneus, and second and third phalanges. The same three skeletons were used in each round of the experiment, and bones were identified by a single analyst (the author) using the standard zooarchaeological procedure of comparison with reference skeletons.

The experiment was conducted in a series of rounds.
In each round, bones were fragmented in the Bone Crusher to a degree that was determined by the spacer height. The
specimens were then screened through nested geological sieves using a Ro-Tap electric sieve shaker to mimic archaeological recovery techniques (with the time and intensity of shaking held constant for each round); the specimens from each screen-size fraction were identified and counted; and the process was then repeated in the next round using a shorter spacer that allowed for greater fragmentation. Seven rounds of bone crushing were conducted, a number that was sufficient for evaluating the model presented above. Data were also collected at round 0 for the unfragmented skeletons, prior to any crushing. In addition to the taxonomic identification of each specimen ("rabbit," "cat," or "unidentified"), several other variables were also recorded at each round, and these are described below, with discussions of methods as appropriate.

Sieve mesh sizes used were 1/4 inch (6.4 mm), 1/8 inch (3.2 mm), and 2 mm. All specimens retained in each of these mesh sizes were counted and identified after each round, while material that fell through the 2 mm screen was captured and weighed but not counted. Thus, in discussing the experiment results, the term "specimen" refers to any piece of bone, identifiable or unidentifiable, that was retained in the 2 mm or larger screen. The data on identified specimens that are presented below are for specimens retained in 1/8 inch or larger screens (i.e., for specimens from both the 1/8 inch and 1/4 inch screen fractions). This keeps these data relevant to what may be the smallest screen size commonly used for large-scale recovery of archaeofaunal remains.
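The screening and counting procedure just described can be sketched as simple bookkeeping code. The mesh thresholds follow the text; the specimen records in the usage example are hypothetical.

```python
# Bookkeeping sketch of the screening procedure described above. The
# mesh thresholds follow the text; the specimen list is hypothetical.

QUARTER_INCH_MM, EIGHTH_INCH_MM, TWO_MM = 6.4, 3.2, 2.0

def tally(specimens):
    """Count specimens by the coarsest sieve that retains each one.
    specimens: list of (max_dimension_mm, taxon_or_None) tuples.
    Returns (counts per fraction, NISP in 1/8-inch or larger screens)."""
    counts = {"1/4-inch": 0, "1/8-inch": 0, "2 mm": 0}
    nisp_eighth_plus = 0
    for size_mm, taxon in specimens:
        if size_mm >= QUARTER_INCH_MM:
            fraction = "1/4-inch"
        elif size_mm >= EIGHTH_INCH_MM:
            fraction = "1/8-inch"
        elif size_mm >= TWO_MM:
            fraction = "2 mm"
        else:
            continue  # fell through the 2 mm sieve: weighed, not counted
        counts[fraction] += 1
        if taxon is not None and fraction != "2 mm":
            nisp_eighth_plus += 1
    return counts, nisp_eighth_plus

round_specimens = [(10.0, "rabbit"), (4.0, "cat"), (2.5, None), (1.0, None)]
counts, round_nisp = tally(round_specimens)  # one specimen per fraction; NISP = 2
```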
Because nested screens were used, data from the experiment could be used to explore interactions between NISP functions and variability in screen size (relating the variable of survival rate used in this paper to the variable of recovery rate used in the model of screen-size effects discussed in Cannon 1999), but that is not done here so that the focus of this paper can be kept on the relationship between NISP and fragmentation.

Results

Data from this experiment that are relevant to evaluating the model discussed above are presented in Table 3. Importantly, the experiment provides empirical support for the hypothesis that NISP values should first rise at lower levels of fragmentation and then decline at higher levels. Figure 5 depicts the NISP functions that resulted from the experiment for both rabbit and cat specimens: the functions for both taxa take the unimodal shape that to this point has remained hypothetical in zooarchaeology. Maximum NISP values observed in the experiment are approximately 3.1 times and 2.3 times the starting whole-bone values for rabbit and cat, respectively (352/114 for rabbit and 133/58 for cat). These maximum NISP values occur in rounds 5 and 6, where the fragmentation rate is, respectively, 13.3 times and 14.3 times the starting value for whole bones (760.67/57.33 for round 5 and /57.33 for round 6). In other words, the decline in specimen survivorship evidently overtakes the increase in specimen production at around a level of fragmentation that equates to each bone being broken, on average, into about 13 or 14 pieces, and NISP declines beyond that level of fragmentation. Of course, the values discussed here are specific to the taxa and elements, mode of fragmentation (mechanical crushing), post-depositional taphonomic history (none), recovery method (1/8 inch screen), and zooarchaeological analyst involved in this experiment, and they are likely not generalizable on an absolute scale to other sets of conditions. They may, however, provide at least an order-of-magnitude indication of the degree to which NISP values might be inflated by fragmentation, as well as of the level of fragmentation at which that inflation will be greatest.

Also importantly, the experiment produced a survival function that is decidedly nonlinear, as is illustrated in Figure 6 (and again, as was noted above, this survival function is not
taxon-specific, because specimens of the two taxa had to be mixed together in the experiment to produce realistic identification conditions). The survival rate for round 0, where all bones were whole, is 1.0 (indicating that all of the elements used in the experiment, for both taxa, can be recovered in 1/8 inch screen when unbroken), but it declines steeply after the first few rounds of bone crushing, and it then begins to level off. The initial steep decline must surely be the result of the creation, at the earliest stages of crushing, of many small specimens that are unidentifiable and/or are not recovered in 1/8 inch screen. As was discussed above, this high initial loss is actually good news in that, had all of those small specimens been recovered and been identifiable, NISP values would have been inflated well beyond what was actually observed in the experiment. Still, even though it may be possible to conclude from this experiment that the effects of fragmentation on NISP are less problematic than the worst-case scenario that one might imagine, it cannot be denied that NISP does vary to some degree with fragmentation, perhaps by a factor of 2 or 3. Because this is undesirable when NISP is being used to measure taxonomic abundance, the issue now becomes determining whether there are useful ways of measuring fragmentation such that its effects on NISP can be controlled.

What is the Best Way to Measure Fragmentation?

This leads to the second, and perhaps more important, purpose of the Bone Crusher experiment, which was to evaluate potential measures of bone fragmentation. The effects of fragmentation on NISP cannot be controlled unless there is a way to measure fragmentation, and the best way to know whether some variable is a useful measure of fragmentation is to observe directly how it responds to changes in fragmentation.
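One way to ask whether a power-law loss function (equation 4) captures a steep-then-level survival series of the kind just described is a crude grid search over σ and ε. This is a sketch under stated assumptions: the (fragmentation rate, survival) pairs below are invented stand-ins with the qualitative shape described, not the actual Table 3 data, and β = 57.33 is the starting whole-bone rate reported above.

```python
# Grid-search sketch for fitting the loss-function parameters of eq. 4
# (sigma and epsilon) to a survival series. The (F, S) pairs below are
# invented stand-ins with the steep-then-level shape described, NOT the
# actual Table 3 data; beta = 57.33 is the starting whole-bone rate.

BETA = 57.33

observed = [(57.33, 1.00), (150, 0.55), (400, 0.40), (760, 0.30), (1100, 0.25)]

def predicted_survival(F, sigma, epsilon):
    """S = 1 - sigma * (F - beta)**epsilon, floored at zero."""
    return max(0.0, 1.0 - sigma * (F - BETA) ** epsilon)

def sse(sigma, epsilon):
    """Sum of squared errors between observed and predicted survival."""
    return sum((S - predicted_survival(F, sigma, epsilon)) ** 2
               for F, S in observed)

best_sigma, best_epsilon = min(
    ((s / 1000, e / 100) for s in range(1, 301) for e in range(5, 101, 5)),
    key=lambda p: sse(*p),
)
# For a series like this one, the best epsilon comes out well below 1,
# i.e., the concave survival decline of Figure 2B, not the line of 2A.
```

A fitted ε near 1 would instead indicate the linear decline of Figure 2A, so the fitted exponent gives a compact summary of which regime a survival series falls in.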
Of course, zooarchaeologists have (usually in the context of investigating issues such as carcass processing intensity, rather than for purposes of controlling for error in NISP) employed a variety of methods for measuring
bone fragmentation (see Wolverton et al. and references therein; also see Ugan 2005). However, the justification for the use of these methods typically relies more on logical argument than on direct empirical evaluation. Direct empirical evaluation can be carried out for certain potential fragmentation measures using data from the Bone Crusher experiment. The variables considered here consist of three that have been used as fragmentation measures in previous zooarchaeological research, and one that has not, to my knowledge, been so used but that conceivably might be. They are the ratio of MNI to NISP, the ratio of the total number of recovered specimens to NISP, bone density, and average specimen size. Each of these is discussed next in turn. Data from the Bone Crusher experiment that are relevant to evaluating these potential fragmentation measures are presented in Table 4.

MNI:NISP Ratio

The first measure of fragmentation that can be evaluated using data from the Bone Crusher experiment is the ratio of MNI to NISP. This ratio, or the fundamentally similar MNE:NISP ratio (or the inverse NISP:MNI or NISP:MNE ratio), has been used as a measure of fragmentation in previous zooarchaeological applications focused on exploring carcass processing intensity (e.g., Wolverton 2002; Wolverton et al. 2008; Ugan 2005). The logic behind the use of this variable is essentially that NISP should increase with greater fragmentation, whereas MNI or MNE should not (see Marshall and Pilgram 1993). Thus, a negative relationship should be observed between the MNI:NISP ratio and the degree of bone fragmentation. As Wolverton (2002; Wolverton et al. 2008) points out, however, such a relationship should hold only up to a point; citing the observation of Marshall and Pilgram (1993) that NISP should eventually begin to decline at high rates of fragmentation, he notes that ratios like MNI:NISP should also eventually exhibit reversals at high rates of fragmentation. This
would limit the utility of ratios like MNI:NISP as fragmentation measures, because it would mean that a single ratio value could indicate two very different levels of fragmentation. The Bone Crusher experiment provides empirical support for the suggestion that such ratios will, in fact, behave in this fashion (Table 4). Specifically, as NISP starts to decline at higher levels of fragmentation in the experimental results, MNI does not, and this causes the MNI:NISP ratio to reverse direction along with NISP, as is illustrated in Figure 7. This suggests that ratios like MNI:NISP do indeed provide an ambiguous measure of fragmentation.

NRSP:NISP Ratio

Noting that ratios such as MNI:NISP may vary with fragmentation in the problematic manner just demonstrated, Wolverton (2002; Wolverton et al. 2008) supplements the use of such ratios with another ratio, used earlier by Grayson (1991), that he refers to as the ratio of the number of specimens (NSP) to NISP. This is simply the total number of specimens in an assemblage, identified and unidentified, divided by NISP (or, equivalently, the inverse of the proportion of the specimens in an assemblage that are identifiable). Here, I evaluate the utility of this variable as a measure of bone fragmentation, though I note that, strictly speaking, the variable being used here is the number of recovered specimens, not the number of specimens that may actually have been produced (i.e., not the number of specimens as this term is defined in the model presented above). Obviously, many of the fragments into which a carcass was broken over the course of its taphonomic history may go unrecovered due to such factors as post-depositional attrition or archaeological recovery methods, and it is only the subset of the fragments that are actually recovered that can be used in a fragmentation measure. I refer to this subset here by the abbreviation NRSP, for number of recovered specimens.
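The contrast between the two ratio measures can be illustrated with toy counts. The round-0 values follow the experiment as described (NISP = 114 rabbit + 58 cat = 172; MNI = 3, two rabbits and one cat); the later-round counts are hypothetical, shaped only to mimic the qualitative pattern reported: NISP rises then falls, MNI stays flat, and unidentified specimens keep accumulating.

```python
# Toy contrast between the two ratio measures discussed above. Round-0
# counts follow the experiment (NISP = 114 rabbit + 58 cat = 172; MNI = 3);
# later rounds are HYPOTHETICAL, shaped only to mimic the reported pattern.

# (round, total NISP, MNI, NRSP)  where NRSP = identified + unidentified
rounds = [
    (0, 172, 3, 172),
    (2, 300, 3, 380),
    (4, 430, 3, 700),
    (6, 480, 3, 1100),  # NISP at its peak
    (7, 420, 3, 1400),  # NISP now declining; NRSP still rising
]

mni_nisp = [mni / n for _, n, mni, _ in rounds]
nrsp_nisp = [nrsp / n for _, n, _, nrsp in rounds]

# mni_nisp falls while NISP rises, then reverses once NISP declines, so a
# single ratio value can correspond to two different fragmentation levels.
# nrsp_nisp increases monotonically, so it remains unambiguous.
```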

Values of the NRSP:NISP ratio for the rounds of the Bone Crusher experiment are provided in Table 4. The NISP values that go into these ratios are simply the sums of the rabbit and cat NISP values listed in Table 3, while the NRSP values are the sums of these total NISP values plus the numbers of recovered unidentified specimens listed in Table 3. The response of the NRSP:NISP ratio observed in the experiment is illustrated in Figure 8, and it is apparent here that this variable can provide a valid measure of fragmentation because it increases in an unambiguous, roughly linear manner with increasing fragmentation, even beyond the point at which NISP begins to decline. There remains a drawback to the use of this variable, however, which is that in multi-taxa assemblages it is not possible to calculate taxon-specific values for it. This is because the NRSP:NISP ratio incorporates unidentified specimens, and these obviously cannot be attributed to any individual taxon. Instead, it is possible to generate only an aggregate variable such as the one used here, which includes both rabbit and cat specimens. To derive taxon-specific values of the ratio, one might be tempted simply to assign unidentifiable specimens to taxa in proportion to the NISP values of those taxa in an assemblage, or to divide the NISP values of individual taxa by the total number of unidentified specimens in the assemblage, but either of these approaches would amount to assuming that fragmentation rates are constant across taxa, and doing so would defeat a main purpose of measuring fragmentation, which is to determine whether fragmentation rates vary among taxa. For this reason, the NRSP:NISP ratio (or its inverse, the proportion of identifiable specimens in an assemblage) is probably best used in a supporting role (which is, in fact, how Wolverton [2002; Wolverton et al. 2008] uses it), providing an indication of overall assemblage fragmentation that may be used in conjunction with other, taxon-specific measures.
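The bookkeeping just described is simple enough to sketch in a few lines. The function below is a hypothetical illustration of the calculation, not code from the experiment, and the per-round counts are made-up stand-ins rather than the values in Table 3:

```python
def nrsp_nisp_ratio(nisp_by_taxon, n_unidentified):
    """NRSP:NISP ratio for one assemblage or experimental round.

    nisp_by_taxon: dict mapping taxon name to its NISP count.
    n_unidentified: count of recovered but unidentifiable specimens.
    """
    nisp = sum(nisp_by_taxon.values())  # total identified specimens
    nrsp = nisp + n_unidentified        # all recovered specimens
    return nrsp / nisp

# Hypothetical counts for a lightly and a heavily crushed round:
light = nrsp_nisp_ratio({"rabbit": 120, "cat": 80}, n_unidentified=50)
heavy = nrsp_nisp_ratio({"rabbit": 90, "cat": 60}, n_unidentified=450)
assert heavy > light  # the ratio rises with fragmentation
```

Note that the ratio is necessarily an aggregate: the unidentified count cannot be split between rabbit and cat without assuming equal fragmentation rates across taxa.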

Bone Density

Another potential measure of fragmentation that can be evaluated here is bone density. (Bone-density scan-site proportion, which I have used previously as a fragmentation measure, as in the analysis presented at the start of this paper, is not evaluated here because the measure discussed next is clearly superior to it in ease of use.) I am not aware of any zooarchaeological application in which analyses based on bone density have been used to measure fragmentation per se, but bone density is, of course, very commonly used as a general indicator of attrition in faunal assemblages (see overview in Lyman 1994). Because attrition may just be another way of saying loss of identifiable specimens, and because, as I argued earlier, loss of identifiable specimens should be related in a predictable manner to fragmentation, it is plausible that bone density data might provide a useful measure of fragmentation. More specifically, if the assumptions that underlie typical thinking about density-mediated attrition are correct, then it should be expected that denser portions of skeletal elements would be more likely to survive the processes that produce fragmentation. In turn, if this were the case, the average density of the identifiable specimens within an assemblage would increase with greater fragmentation.

While this seems reasonable as an expectation, it is not borne out empirically by the Bone Crusher experiment. Data relevant to this test are provided in Table 4 and graphed in Figure 9. The data in this table and figure reflect the mean volume density of all scan sites present on identified rabbit specimens (again, from 1/8-inch and larger screens), using the rabbit bone density measurements of Pavao and Stahl (1999). Scan-site presence was recorded in this experiment in each of two different ways: in the first method, a scan site was recorded as present if any portion of the bone at the location of that scan site was present, and in the second, a scan site was recorded as present only if at least half of the circumference of the bone at that location was present. With the first method, a scan site from a single individual bone can be counted more than once if that bone is broken such that the scan site occurs on two or more identifiable pieces of it (i.e., scan-site counts can be inflated by fragmentation), whereas this cannot occur with the second. The two different methods were used to determine whether one might better capture density-mediated processes than the other. However, it is not possible to tell whether one method works better than the other because, in fact, no matter which one is used, mean bone density does not vary with fragmentation in the expected manner discussed above. For the "any portion present" method, there is virtually no relationship between mean scan-site density and fragmentation rate (Spearman's rho = , 2-tailed p = 0.888). For the "> 1/2 present" method, the relationship is negative (Spearman's rho = , 2-tailed p < 0.001), counter to the expectation that mean density should increase with increasing fragmentation. In other words, this latter method would seem to indicate that it is actually less dense elements, or portions of elements, that are more likely to be recovered and remain identifiable at higher levels of fragmentation, not more dense elements or portions thereof.

Although analyses involving bone density certainly may be very important for many other purposes in zooarchaeology, it would appear, based on the results observed here, that this variable is not particularly useful as a measure of bone fragmentation. Depending on the method used to tabulate scan-site presence, mean bone density either does not vary with fragmentation in a systematic way at all, or it varies in a way that runs counter to what should be observed if specimen survivorship (in the sense in which this term is used in the model described above) is density-mediated.
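The density test just described reduces to a rank correlation between fragmentation level (experimental round) and mean scan-site density. For readers who wish to replicate it without a statistics package, a minimal pure-Python version of Spearman's rho is sketched below; the density values are hypothetical illustrations, not the Pavao and Stahl (1999) measurements:

```python
def _ranks(xs):
    """Average ranks (1-based), with tied values sharing the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend over a run of ties
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson's r computed on ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical mean scan-site densities by crushing round (illustration only):
rounds = [0, 1, 2, 3, 4, 5]
density = [0.42, 0.41, 0.39, 0.36, 0.35, 0.33]
print(spearman_rho(rounds, density))  # close to -1 for a strictly declining trend
```

A rho near zero would correspond to the "any portion present" result reported above; a clearly negative rho to the "> 1/2 present" result.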
I would also note that a broader implication of this experimental result may be that bone attrition (more generally defined) is not related to density in the manner one might expect, at least when attrition occurs solely through mechanical crushing, as was the case in this experiment; however, further exploration of that issue is beyond the scope of the present paper.

Specimen Size

A final potentially useful measure of fragmentation that I consider here is specimen size. Specimen size is directly and obviously related to fragmentation because, as a given set of bones becomes broken up into more and more pieces, the average size of those pieces must necessarily decrease. However, a potentially serious practical drawback to using this variable as a measure of fragmentation is the time that it takes to record it using traditional methods (e.g., manually measuring individual specimens one at a time using calipers, a size template, etc.). Although specimen size often is measured manually in zooarchaeological analyses, including for purposes of measuring bone fragmentation (e.g., Ugan 2005), time and budget constraints make it impractical to do this in some projects, and even when such constraints do not completely preclude manual measurement, the labor required invariably makes it very expensive. Fortunately, though, there is a method available for measuring specimen size that enables large numbers of bone fragments to be measured very quickly and cheaply, greatly easing the constraints on size measurement. This method involves the use of digital image analysis. A particular focus of the Bone Crusher experiment was to employ digital image analysis to track changes in specimen size across experimental rounds, and data from the experiment can therefore be used to evaluate whether specimen size, and specifically specimen size as measured through digital image analysis, provides a useful measure of bone fragmentation.
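The relationship invoked here is just conservation of material: a fixed total amount of bone divided into n recovered pieces has a mean piece size proportional to 1/n. The toy numbers below are hypothetical and serve only to illustrate the monotonic decline:

```python
TOTAL_AREA_CM2 = 500.0  # combined area of a bone assemblage, held constant

def mean_specimen_size(n_fragments):
    """Average fragment size when a fixed total is split into n pieces."""
    return TOTAL_AREA_CM2 / n_fragments

# Doubling the number of fragments halves the mean specimen size:
sizes = [mean_specimen_size(n) for n in (50, 100, 200, 400)]
assert sizes == sorted(sizes, reverse=True)
```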
To perform the digital image analysis, digital photographs of the experimental assemblage were taken at the end of every round of the experiment, and these were analyzed using ImageJ software, which is freely available online through the U.S. National Institutes of Health. Among many other things, this software package can automatically trace outlines of objects in a digital image, calculate the area within those outlines (calibrated based on some spatial reference in the photo), and output a file with size measurements for each object in the image (Figure 10). Arranging bones to take the photographs requires some time (specimens must be laid out so that none are touching), as does the image analysis process, but this is only a very small fraction of the time that would be necessary to take size measurements manually on individual specimens.

Methods: At a more detailed level, the methods used for digital image analysis were as follows. After each round of bone crushing and identification, photographs were taken separately by taxon (rabbit, cat, and unidentified) and by screen-size fraction (an exception was that taxon-specific photographs were not taken for whole bones at round 0 of the experiment because the utility of such photographs was not recognized until after round 1 crushing had occurred). This was done to enable calculation of taxon- and screen size-specific average size values. The size data reported here (Table 4) are for identified rabbit and cat specimens from 1/8-inch screen samples (i.e., from the 1/8-inch and 1/4-inch screen fractions combined). Although ImageJ will compute many shape- and size-related variables for objects in an image, simple specimen area was used for this analysis. Because area is a two-dimensional variable and bone fragments are three-dimensional, some error will be present in the size measurements that is related to variation in the orientation of individual objects: e.g., a bone that is long and flat will be measured as having a much smaller area if it is lying on its edge in a photograph than would be the case if it were lying flat.
In an effort to randomize such error, the specimens in each sub-assemblage (i.e., from each taxon and screen-size fraction) were photographed three times at each round of the experiment, and the specimens were shuffled between photographs to vary their orientations. The data used here are based on averages of the three shuffles for each sub-assemblage. The photographs were taken with a Sony Cyber-shot 1.3-megapixel digital camera (though any basic digital camera will suffice). The camera was mounted on a copy stand for ease of use and consistency among photographs, and a black background was used for maximum contrast with the white bones. The photographs, which were taken in color and saved as .jpg files, were opened in ImageJ and analyzed using the software's Analyze Particles tool. Prior to analysis, the photographs were converted to black-and-white 8-bit image type within the software, which is a prerequisite for use of the Analyze Particles tool, and image threshold values were set to 100 (lower) and 255 (upper) to further maximize contrast. In addition, the distance scale was calibrated using the software's Set Scale tool, based on a 10-cm scale bar that was placed in each photograph for distance calibration purposes (see Figure 10). Labels denoting the experimental round, screen size, taxon, and shuffle were also placed in each photograph for record-keeping purposes; these labels and the scale bar were excluded from the analyzed image using the software's Select function before running Analyze Particles. ImageJ outputs analysis results for an image as a text file that includes (among several other variables) an area measurement for each object in the image. The text files for each analyzed image were pasted into a single master spreadsheet for subsequent analysis. In addition, the Show Outlines option of the Analyze Particles tool was selected for the analysis of each image; this produced .jpg files showing outlines of the objects in each image (see Figure 10B), which were saved, again for record-keeping purposes.
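ImageJ's Analyze Particles tool is, at its core, a threshold-and-label routine: binarize the image, find connected foreground regions, and report each region's calibrated area. The sketch below is not the ImageJ implementation; it is a simplified pure-Python analogue, assuming a grayscale image stored as a nested list, a hypothetical calibration of 0.1 cm per pixel, and a minimum-area cutoff for excluding dust-sized specks:

```python
from collections import deque

def particle_areas(image, threshold=100, cm_per_px=0.1, min_area_cm2=0.02):
    """Simplified analogue of ImageJ's Analyze Particles.

    Thresholds a grayscale image, labels 8-connected foreground regions
    by flood fill, and returns each region's area in cm^2, discarding
    particles smaller than min_area_cm2 (e.g., dust specks).
    """
    h, w = len(image), len(image[0])
    mask = [[px >= threshold for px in row] for row in image]
    seen = [[False] * w for _ in range(h)]
    areas = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                seen[y][x] = True
                n_px, queue = 0, deque([(y, x)])
                while queue:  # flood-fill one particle
                    cy, cx = queue.popleft()
                    n_px += 1
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and mask[ny][nx] and not seen[ny][nx]):
                                seen[ny][nx] = True
                                queue.append((ny, nx))
                area_cm2 = n_px * cm_per_px ** 2
                if area_cm2 >= min_area_cm2:
                    areas.append(area_cm2)
    return areas

# Toy "photograph": two bright blobs on a black background.
img = [[0] * 10 for _ in range(6)]
for y, x in [(1, 1), (1, 2), (2, 1), (2, 2),  # 4-pixel "specimen"
             (4, 6), (4, 7), (4, 8)]:         # 3-pixel "specimen"
    img[y][x] = 255
areas = particle_areas(img)
mean_size = sum(areas) / len(areas)  # the statistic tracked across rounds
```

Because the photographs were taken separately by taxon, running such a routine per photograph yields taxon-specific mean specimen sizes directly.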
Very small particles (even pieces of dust), which unavoidably appear in photographs, are measured by the software and included in the output, but these are easily recognized by their extremely small area values and were deleted from the master data file for this analysis.

Results: The results of the Bone Crusher experiment indicate that specimen size, and specifically specimen area as measured through digital image analysis, is a valid and useful measure of fragmentation. For both rabbit and cat specimens, size declines in a roughly linear manner with increasing fragmentation (Figure 11). The fact that the relationship is linear means that specimen size is more useful as a measure of fragmentation than the MNI:NISP ratio, which can give ambiguous results due to non-linear responses to increasing fragmentation. In addition, the fact that it exhibits any sort of predictable relationship with fragmentation at all makes specimen size more useful than average bone density, which either does not vary systematically with fragmentation or varies in a manner that runs counter to what would be expected if specimen survivorship were density-mediated. And finally, unlike the NRSP:NISP ratio, it is possible to generate taxon-specific values for specimen size through digital image analysis, as long as specimens are grouped by taxon in the images used in the analysis.

Discussion

This paper has presented a formal model of the relationship between NISP and bone fragmentation that can enable a more detailed understanding of that relationship than previous, primarily verbal (though important) explorations of it have been able to provide. It has also presented the results of an experiment that provide empirical support for the oft-discussed, but never tested, hypothesis that NISP values should first increase and then decrease with increases in fragmentation rate. The experimental results also suggest that NISP values may not be inflated by fragmentation as much as some might fear, perhaps only by a factor of about 2 or 3 relative to unfragmented, whole-bone values. And finally, and perhaps most important, the experimental

results provide empirical evaluation of several potential measures of bone fragmentation, suggesting that specimen size, which can be determined easily through digital image analysis, is more useful than other variables that have been or that might be used as fragmentation measures. Neither the MNI:NISP ratio nor mean bone density appears to vary with fragmentation in ways that make them useful measures of it, and while the NRSP:NISP ratio does, it is not possible to generate taxon-specific values of this ratio in multi-taxa assemblages. Specimen size, on the other hand, varies linearly with fragmentation rate, providing an unambiguous measure of it, and it can be calculated for individual taxa. Moreover, it can be measured fairly quickly and easily, even for large assemblages, through digital image analysis. There are, of course, other variables not considered here that might also provide useful measures of bone fragmentation. For example, average specimen weight has been used (e.g., Ugan 2005), and this variable should be related to fragmentation rate in a direct and unambiguous manner, much as average specimen size is. I would note, however, that bone weight may be affected by post-depositional diagenesis, as well as by processes that result in density-mediated attrition: assemblages that have experienced more density-mediated attrition will necessarily be biased towards greater average mass per specimen volume (i.e., size). Still, in cases where it can be independently demonstrated that diagenesis or density-mediated attrition have not substantially affected bone samples differentially, weight may provide a useful way of comparing the degree of fragmentation among those samples. There are also, of course, many types of research questions, other than the kind on which I have focused here, for which it is necessary to be able to measure bone fragmentation.
Though this paper has focused on the problem of fragmentation-related error in NISP-based analyses of taxonomic abundance, fragmentation is probably most often examined within zooarchaeology in

the context of questions about carcass-processing intensity, as I noted above (e.g., Wolverton et al. 2008; Ugan 2005). In analyses focused on questions of that sort, fragmentation does not constitute a source of error but is instead the variable of direct interest. The empirical evaluation of fragmentation measures presented here should be just as useful for bone-processing studies as it is for NISP-based taxonomic abundance studies.

And finally, how, specifically, should a measure of fragmentation be used to control for the effects of this variable in NISP-based analyses of taxonomic abundance? Simply put, all that needs to be done is to show that fragmentation does not vary among samples in a manner that might confound conclusions about taxonomic abundance. In the Mimbres Valley example discussed at the start of this paper, it appears that fragmentation (in that case measured by mean scan-site proportions) does vary in a way that could provide an alternative explanation, relative to the artiodactyl resource depression-population rebound hypothesis, for the observed pattern in the Artiodactyl Index. This is cause for concern (though it also certainly raises interesting new questions regarding the cause of the differences in fragmentation), and the problem would not even have been recognized had a fragmentation measure not been applied. If, on the other hand, it were the case that fragmentation did not vary substantially among those samples, then there would not be cause for concern, and it would be possible to place greater confidence in the resource depression-population rebound explanation; but the fragmentation analysis would have to actually be conducted in order for that confidence to be earned. And because it is possible to quantify fragmentation directly and efficiently through digital image-based measurement of specimen size, valid reasons for not taking fragmentation into account begin to disappear.
There are, of course, complications to be dealt with in such an approach. In particular, if average specimen size is the fragmentation measure that is used, then the average size of whole


More information

Field-Effect (FET) transistors

Field-Effect (FET) transistors Field-Effect (FET) transistors References: Hayes & Horowitz (pp 142-162 and 244-266), Rizzoni (chapters 8 & 9) In a field-effect transistor (FET), the width of a conducting channel in a semiconductor and,

More information

Chapter 11 Number Theory

Chapter 11 Number Theory Chapter 11 Number Theory Number theory is one of the oldest branches of mathematics. For many years people who studied number theory delighted in its pure nature because there were few practical applications

More information

Chemistry 111 Laboratory Experiment 7: Determination of Reaction Stoichiometry and Chemical Equilibrium

Chemistry 111 Laboratory Experiment 7: Determination of Reaction Stoichiometry and Chemical Equilibrium Chemistry 111 Laboratory Experiment 7: Determination of Reaction Stoichiometry and Chemical Equilibrium Introduction The word equilibrium suggests balance or stability. The fact that a chemical reaction

More information

Principle of Data Reduction

Principle of Data Reduction Chapter 6 Principle of Data Reduction 6.1 Introduction An experimenter uses the information in a sample X 1,..., X n to make inferences about an unknown parameter θ. If the sample size n is large, then

More information

This unit will lay the groundwork for later units where the students will extend this knowledge to quadratic and exponential functions.

This unit will lay the groundwork for later units where the students will extend this knowledge to quadratic and exponential functions. Algebra I Overview View unit yearlong overview here Many of the concepts presented in Algebra I are progressions of concepts that were introduced in grades 6 through 8. The content presented in this course

More information

Chapter 27: Taxation. 27.1: Introduction. 27.2: The Two Prices with a Tax. 27.2: The Pre-Tax Position

Chapter 27: Taxation. 27.1: Introduction. 27.2: The Two Prices with a Tax. 27.2: The Pre-Tax Position Chapter 27: Taxation 27.1: Introduction We consider the effect of taxation on some good on the market for that good. We ask the questions: who pays the tax? what effect does it have on the equilibrium

More information

Time Series Forecasting Techniques

Time Series Forecasting Techniques 03-Mentzer (Sales).qxd 11/2/2004 11:33 AM Page 73 3 Time Series Forecasting Techniques Back in the 1970s, we were working with a company in the major home appliance industry. In an interview, the person

More information

Sample Size and Power in Clinical Trials

Sample Size and Power in Clinical Trials Sample Size and Power in Clinical Trials Version 1.0 May 011 1. Power of a Test. Factors affecting Power 3. Required Sample Size RELATED ISSUES 1. Effect Size. Test Statistics 3. Variation 4. Significance

More information

5.1 Radical Notation and Rational Exponents

5.1 Radical Notation and Rational Exponents Section 5.1 Radical Notation and Rational Exponents 1 5.1 Radical Notation and Rational Exponents We now review how exponents can be used to describe not only powers (such as 5 2 and 2 3 ), but also roots

More information

Section 1.4. Difference Equations

Section 1.4. Difference Equations Difference Equations to Differential Equations Section 1.4 Difference Equations At this point almost all of our sequences have had explicit formulas for their terms. That is, we have looked mainly at sequences

More information

Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model

Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model 1 September 004 A. Introduction and assumptions The classical normal linear regression model can be written

More information

8. Average product reaches a maximum when labor equals A) 100 B) 200 C) 300 D) 400

8. Average product reaches a maximum when labor equals A) 100 B) 200 C) 300 D) 400 Ch. 6 1. The production function represents A) the quantity of inputs necessary to produce a given level of output. B) the various recipes for producing a given level of output. C) the minimum amounts

More information

Chapter 5: Working with contours

Chapter 5: Working with contours Introduction Contoured topographic maps contain a vast amount of information about the three-dimensional geometry of the land surface and the purpose of this chapter is to consider some of the ways in

More information

Lecture 3: Finding integer solutions to systems of linear equations

Lecture 3: Finding integer solutions to systems of linear equations Lecture 3: Finding integer solutions to systems of linear equations Algorithmic Number Theory (Fall 2014) Rutgers University Swastik Kopparty Scribe: Abhishek Bhrushundi 1 Overview The goal of this lecture

More information

Internal Quality Assurance Arrangements

Internal Quality Assurance Arrangements National Commission for Academic Accreditation & Assessment Handbook for Quality Assurance and Accreditation in Saudi Arabia PART 2 Internal Quality Assurance Arrangements Version 2.0 Internal Quality

More information

HISTOGRAMS, CUMULATIVE FREQUENCY AND BOX PLOTS

HISTOGRAMS, CUMULATIVE FREQUENCY AND BOX PLOTS Mathematics Revision Guides Histograms, Cumulative Frequency and Box Plots Page 1 of 25 M.K. HOME TUITION Mathematics Revision Guides Level: GCSE Higher Tier HISTOGRAMS, CUMULATIVE FREQUENCY AND BOX PLOTS

More information

Chapter Seven. Multiple regression An introduction to multiple regression Performing a multiple regression on SPSS

Chapter Seven. Multiple regression An introduction to multiple regression Performing a multiple regression on SPSS Chapter Seven Multiple regression An introduction to multiple regression Performing a multiple regression on SPSS Section : An introduction to multiple regression WHAT IS MULTIPLE REGRESSION? Multiple

More information

Integer Operations. Overview. Grade 7 Mathematics, Quarter 1, Unit 1.1. Number of Instructional Days: 15 (1 day = 45 minutes) Essential Questions

Integer Operations. Overview. Grade 7 Mathematics, Quarter 1, Unit 1.1. Number of Instructional Days: 15 (1 day = 45 minutes) Essential Questions Grade 7 Mathematics, Quarter 1, Unit 1.1 Integer Operations Overview Number of Instructional Days: 15 (1 day = 45 minutes) Content to Be Learned Describe situations in which opposites combine to make zero.

More information

Inflation. Chapter 8. 8.1 Money Supply and Demand

Inflation. Chapter 8. 8.1 Money Supply and Demand Chapter 8 Inflation This chapter examines the causes and consequences of inflation. Sections 8.1 and 8.2 relate inflation to money supply and demand. Although the presentation differs somewhat from that

More information

The Taxman Game. Robert K. Moniot September 5, 2003

The Taxman Game. Robert K. Moniot September 5, 2003 The Taxman Game Robert K. Moniot September 5, 2003 1 Introduction Want to know how to beat the taxman? Legally, that is? Read on, and we will explore this cute little mathematical game. The taxman game

More information

Math 4310 Handout - Quotient Vector Spaces

Math 4310 Handout - Quotient Vector Spaces Math 4310 Handout - Quotient Vector Spaces Dan Collins The textbook defines a subspace of a vector space in Chapter 4, but it avoids ever discussing the notion of a quotient space. This is understandable

More information

Scaling and Biasing Analog Signals

Scaling and Biasing Analog Signals Scaling and Biasing Analog Signals November 2007 Introduction Scaling and biasing the range and offset of analog signals is a useful skill for working with a variety of electronics. Not only can it interface

More information

Charlesworth School Year Group Maths Targets

Charlesworth School Year Group Maths Targets Charlesworth School Year Group Maths Targets Year One Maths Target Sheet Key Statement KS1 Maths Targets (Expected) These skills must be secure to move beyond expected. I can compare, describe and solve

More information

NEW MEXICO Grade 6 MATHEMATICS STANDARDS

NEW MEXICO Grade 6 MATHEMATICS STANDARDS PROCESS STANDARDS To help New Mexico students achieve the Content Standards enumerated below, teachers are encouraged to base instruction on the following Process Standards: Problem Solving Build new mathematical

More information

Bayesian probability theory

Bayesian probability theory Bayesian probability theory Bruno A. Olshausen arch 1, 2004 Abstract Bayesian probability theory provides a mathematical framework for peforming inference, or reasoning, using probability. The foundations

More information

Session 7 Bivariate Data and Analysis

Session 7 Bivariate Data and Analysis Session 7 Bivariate Data and Analysis Key Terms for This Session Previously Introduced mean standard deviation New in This Session association bivariate analysis contingency table co-variation least squares

More information

Problem of the Month: Cutting a Cube

Problem of the Month: Cutting a Cube Problem of the Month: The Problems of the Month (POM) are used in a variety of ways to promote problem solving and to foster the first standard of mathematical practice from the Common Core State Standards:

More information

Lecture 2. Marginal Functions, Average Functions, Elasticity, the Marginal Principle, and Constrained Optimization

Lecture 2. Marginal Functions, Average Functions, Elasticity, the Marginal Principle, and Constrained Optimization Lecture 2. Marginal Functions, Average Functions, Elasticity, the Marginal Principle, and Constrained Optimization 2.1. Introduction Suppose that an economic relationship can be described by a real-valued

More information

25 Integers: Addition and Subtraction

25 Integers: Addition and Subtraction 25 Integers: Addition and Subtraction Whole numbers and their operations were developed as a direct result of people s need to count. But nowadays many quantitative needs aside from counting require numbers

More information

Descriptive Statistics and Measurement Scales

Descriptive Statistics and Measurement Scales Descriptive Statistics 1 Descriptive Statistics and Measurement Scales Descriptive statistics are used to describe the basic features of the data in a study. They provide simple summaries about the sample

More information

A GUIDE TO LABORATORY REPORT WRITING ILLINOIS INSTITUTE OF TECHNOLOGY THE COLLEGE WRITING PROGRAM

A GUIDE TO LABORATORY REPORT WRITING ILLINOIS INSTITUTE OF TECHNOLOGY THE COLLEGE WRITING PROGRAM AT THE ILLINOIS INSTITUTE OF TECHNOLOGY THE COLLEGE WRITING PROGRAM www.iit.edu/~writer writer@charlie.cns.iit.edu FALL 1999 Table of Contents Table of Contents... 2 Introduction... 3 Need for Report Writing...

More information

Canonical Correlation Analysis

Canonical Correlation Analysis Canonical Correlation Analysis LEARNING OBJECTIVES Upon completing this chapter, you should be able to do the following: State the similarities and differences between multiple regression, factor analysis,

More information

3. Logical Reasoning in Mathematics

3. Logical Reasoning in Mathematics 3. Logical Reasoning in Mathematics Many state standards emphasize the importance of reasoning. We agree disciplined mathematical reasoning is crucial to understanding and to properly using mathematics.

More information

Student Performance Q&A:

Student Performance Q&A: Student Performance Q&A: 2008 AP Calculus AB and Calculus BC Free-Response Questions The following comments on the 2008 free-response questions for AP Calculus AB and Calculus BC were written by the Chief

More information

Minnesota Academic Standards

Minnesota Academic Standards A Correlation of to the Minnesota Academic Standards Grades K-6 G/M-204 Introduction This document demonstrates the high degree of success students will achieve when using Scott Foresman Addison Wesley

More information

CHAPTER 2 Estimating Probabilities

CHAPTER 2 Estimating Probabilities CHAPTER 2 Estimating Probabilities Machine Learning Copyright c 2016. Tom M. Mitchell. All rights reserved. *DRAFT OF January 24, 2016* *PLEASE DO NOT DISTRIBUTE WITHOUT AUTHOR S PERMISSION* This is a

More information

Characterizing Digital Cameras with the Photon Transfer Curve

Characterizing Digital Cameras with the Photon Transfer Curve Characterizing Digital Cameras with the Photon Transfer Curve By: David Gardner Summit Imaging (All rights reserved) Introduction Purchasing a camera for high performance imaging applications is frequently

More information

Linear Programming. Solving LP Models Using MS Excel, 18

Linear Programming. Solving LP Models Using MS Excel, 18 SUPPLEMENT TO CHAPTER SIX Linear Programming SUPPLEMENT OUTLINE Introduction, 2 Linear Programming Models, 2 Model Formulation, 4 Graphical Linear Programming, 5 Outline of Graphical Procedure, 5 Plotting

More information

Commercial leases and insurance claims

Commercial leases and insurance claims Commercial leases and insurance claims by the CILA Property Special Interest Group 31st May 2016 Introduction This paper is intended as a guidance document to understanding commercial leases, particularly

More information

Comparing Two Groups. Standard Error of ȳ 1 ȳ 2. Setting. Two Independent Samples

Comparing Two Groups. Standard Error of ȳ 1 ȳ 2. Setting. Two Independent Samples Comparing Two Groups Chapter 7 describes two ways to compare two populations on the basis of independent samples: a confidence interval for the difference in population means and a hypothesis test. The

More information

Prime Factorization 0.1. Overcoming Math Anxiety

Prime Factorization 0.1. Overcoming Math Anxiety 0.1 Prime Factorization 0.1 OBJECTIVES 1. Find the factors of a natural number 2. Determine whether a number is prime, composite, or neither 3. Find the prime factorization for a number 4. Find the GCF

More information

Lectures, 2 ECONOMIES OF SCALE

Lectures, 2 ECONOMIES OF SCALE Lectures, 2 ECONOMIES OF SCALE I. Alternatives to Comparative Advantage Economies of Scale The fact that the largest share of world trade consists of the exchange of similar (manufactured) goods between

More information

Copyright 2011 Casa Software Ltd. www.casaxps.com. Centre of Mass

Copyright 2011 Casa Software Ltd. www.casaxps.com. Centre of Mass Centre of Mass A central theme in mathematical modelling is that of reducing complex problems to simpler, and hopefully, equivalent problems for which mathematical analysis is possible. The concept of

More information

Standards for Mathematical Practice: Commentary and Elaborations for 6 8

Standards for Mathematical Practice: Commentary and Elaborations for 6 8 Standards for Mathematical Practice: Commentary and Elaborations for 6 8 c Illustrative Mathematics 6 May 2014 Suggested citation: Illustrative Mathematics. (2014, May 6). Standards for Mathematical Practice:

More information

The Effects of Start Prices on the Performance of the Certainty Equivalent Pricing Policy

The Effects of Start Prices on the Performance of the Certainty Equivalent Pricing Policy BMI Paper The Effects of Start Prices on the Performance of the Certainty Equivalent Pricing Policy Faculty of Sciences VU University Amsterdam De Boelelaan 1081 1081 HV Amsterdam Netherlands Author: R.D.R.

More information

Assessment Anchors and Eligible Content

Assessment Anchors and Eligible Content M07.A-N The Number System M07.A-N.1 M07.A-N.1.1 DESCRIPTOR Assessment Anchors and Eligible Content Aligned to the Grade 7 Pennsylvania Core Standards Reporting Category Apply and extend previous understandings

More information

Permutation Tests for Comparing Two Populations

Permutation Tests for Comparing Two Populations Permutation Tests for Comparing Two Populations Ferry Butar Butar, Ph.D. Jae-Wan Park Abstract Permutation tests for comparing two populations could be widely used in practice because of flexibility of

More information

G C.3 Construct the inscribed and circumscribed circles of a triangle, and prove properties of angles for a quadrilateral inscribed in a circle.

G C.3 Construct the inscribed and circumscribed circles of a triangle, and prove properties of angles for a quadrilateral inscribed in a circle. Performance Assessment Task Circle and Squares Grade 10 This task challenges a student to analyze characteristics of 2 dimensional shapes to develop mathematical arguments about geometric relationships.

More information

Figure 1. A typical Laboratory Thermometer graduated in C.

Figure 1. A typical Laboratory Thermometer graduated in C. SIGNIFICANT FIGURES, EXPONENTS, AND SCIENTIFIC NOTATION 2004, 1990 by David A. Katz. All rights reserved. Permission for classroom use as long as the original copyright is included. 1. SIGNIFICANT FIGURES

More information

https://williamshartunionca.springboardonline.org/ebook/book/27e8f1b87a1c4555a1212b...

https://williamshartunionca.springboardonline.org/ebook/book/27e8f1b87a1c4555a1212b... of 19 9/2/2014 12:09 PM Answers Teacher Copy Plan Pacing: 1 class period Chunking the Lesson Example A #1 Example B Example C #2 Check Your Understanding Lesson Practice Teach Bell-Ringer Activity Students

More information

Measurement with Ratios

Measurement with Ratios Grade 6 Mathematics, Quarter 2, Unit 2.1 Measurement with Ratios Overview Number of instructional days: 15 (1 day = 45 minutes) Content to be learned Use ratio reasoning to solve real-world and mathematical

More information

Conn Valuation Services Ltd.

Conn Valuation Services Ltd. CAPITALIZED EARNINGS VS. DISCOUNTED CASH FLOW: Which is the more accurate business valuation tool? By Richard R. Conn CMA, MBA, CPA, ABV, ERP Is the capitalized earnings 1 method or discounted cash flow

More information

Regression III: Advanced Methods

Regression III: Advanced Methods Lecture 16: Generalized Additive Models Regression III: Advanced Methods Bill Jacoby Michigan State University http://polisci.msu.edu/jacoby/icpsr/regress3 Goals of the Lecture Introduce Additive Models

More information

3. Mathematical Induction

3. Mathematical Induction 3. MATHEMATICAL INDUCTION 83 3. Mathematical Induction 3.1. First Principle of Mathematical Induction. Let P (n) be a predicate with domain of discourse (over) the natural numbers N = {0, 1,,...}. If (1)

More information

8 Divisibility and prime numbers

8 Divisibility and prime numbers 8 Divisibility and prime numbers 8.1 Divisibility In this short section we extend the concept of a multiple from the natural numbers to the integers. We also summarize several other terms that express

More information

Hypothesis testing. c 2014, Jeffrey S. Simonoff 1

Hypothesis testing. c 2014, Jeffrey S. Simonoff 1 Hypothesis testing So far, we ve talked about inference from the point of estimation. We ve tried to answer questions like What is a good estimate for a typical value? or How much variability is there

More information