From Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence (1993). Chambery, France: Morgan Kaufmann

Average-Case Analysis of a Nearest Neighbor Algorithm

Pat Langley
Learning Systems Department, Siemens Corporate Research
755 College Road East, Princeton, NJ USA

Wayne Iba
AI Research Branch, Mail Stop
NASA Ames Research Center, Moffett Field, CA USA

Abstract

In this paper we present an average-case analysis of the nearest neighbor algorithm, a simple induction method that has been studied by many researchers. Our analysis assumes a conjunctive target concept, noise-free Boolean attributes, and a uniform distribution over the instance space. We calculate the probability that the algorithm will encounter a test instance that is distance d from the prototype of the concept, along with the probability that the nearest stored training case is distance e from this test instance. From this we compute the probability of correct classification as a function of the number of observed training cases, the number of relevant attributes, and the number of irrelevant attributes. We also explore the behavioral implications of the analysis by presenting predicted learning curves for artificial domains, and give experimental results on these domains as a check on our reasoning.

1. Nearest Neighbor Algorithms

Most learning methods form some abstraction from experience and store this structure in memory. The field has explored a wide range of such structures, including decision trees (Quinlan, 1986), multilayer networks (Rumelhart & McClelland, 1986), and probabilistic summaries (Fisher, 1987). However, in recent years there has been growing interest in methods that store instances or cases in memory, and that apply this specific knowledge directly to new situations. This approach goes by many names, including instance-based learning and case-based reasoning, and one can apply it to many different tasks.
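To make the instance-storage idea concrete, here is a minimal sketch of such a classifier (our own illustration, not code from the paper; the class name and the choice of Hamming distance over Boolean attributes are ours): it memorizes every training case and predicts the class of the nearest stored case, breaking ties at random.

```python
import random

def hamming(x, y):
    # distance = number of attributes on which two Boolean cases differ
    return sum(a != b for a, b in zip(x, y))

class NearestNeighbor:
    """Basic 1-NN: store every training case, predict the class of the
    nearest stored case, breaking ties at random."""

    def __init__(self):
        self.cases = []  # list of (instance, class) pairs

    def train(self, instance, label):
        self.cases.append((instance, label))

    def predict(self, instance):
        best = min(hamming(x, instance) for x, _ in self.cases)
        nearest = [c for x, c in self.cases if hamming(x, instance) == best]
        return random.choice(nearest)
```

All of the work happens at retrieval time; "learning" is nothing more than appending to the case list.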
The simplest and most widely studied class of techniques, often called nearest neighbor algorithms, originated in the field of pattern recognition (Cover & Hart, 1967; Dasarathy, 1991) and applies to classification tasks. In the basic method, learning appears almost trivial: one simply stores each training instance in memory. The power of the method comes from the retrieval process. Given a new test instance, one finds the stored training case that is nearest according to some distance measure, notes the class of the retrieved case, and predicts that the new instance will have the same class. Many variants exist on this basic algorithm. For instance, Stanfill and Waltz (1986) have studied a version that retrieves the k closest instances and bases predictions on a weighted vote, incorporating the distance of each stored instance from the test case; such techniques are often referred to as k-nearest neighbor algorithms. Others (Cover & Hart, 1967; Aha, Kibler, & Albert, 1991) have studied an alternative approach that stores cases in memory only upon making an error, thus reducing memory load and retrieval time with little reduction in accuracy.

We would like to understand the learning behavior of this intriguing class of methods under various conditions. Aha et al. present a PAC analysis of one such algorithm, but our aim is to obtain tighter bounds that we can directly relate to experimental results. To this end, we decided to pursue an average-case analysis along the lines developed by Hirschberg and Pazzani (1991) for logical induction methods and by Langley, Iba, and Thompson (1992) for probabilistic ones. For the sake of tractability, we focused our efforts on the most basic of the instance-based techniques, which stores all training cases and bases its prediction on the single nearest neighbor. However, the simplicity of this method does not mean it lacks power. Aha et al.
(1991) report the results of an experimental study that compared the algorithm (which they called IB1) to Quinlan's (1986) more sophisticated C4 algorithm for inducing decision trees. Table 1 contains the results on four natural domains, two of them ("Cleveland" and "Hungarian") involving prediction of heart disease from symptoms, another concerning the diagnosis of primary tumors, and a fourth involving prediction of party affiliations for members of Congress from their voting records. For each domain, Aha et al. trained the algorithms on approximately 80% of the cases and tested them on the remaining instances, averaging over 50 different partitions. On the Cleveland data, the two algorithms' performance was indistinguishable, and IB1's behavior on the tumor and voting records nearly reached C4's level. Although the basic nearest neighbor algorithm fared much worse on the Hungarian data set, simple modifications produced accuracy comparable to that for C4 (Aha, 1990), and its performance on the other domains argues that it deserves closer inspection in any case. In the remainder of this paper, we report the initial results of our average-case analysis of the simple nearest neighbor method. We begin by presenting the assumptions of the analysis, followed by our derivation of the equations for predicting accuracy as a function of three variables. After this, we examine the implications of the analysis for the algorithm's behavior, comparing predicted learning curves for artificial domains of differing difficulty. Finally, we note some limitations of the analysis and suggest directions for future research.

Table 1. Percentage accuracies for a nearest neighbor method (IB1) and a decision-tree algorithm (C4) on four classification domains, taken from Aha et al. (1991). Each column reports both average accuracy of classification and standard error. [The numeric entries did not survive in this copy.]

Domain           IB1    C4
Cleveland         -      -
Hungarian         -      -
Primary tumor     -      -
Voting records    -      -

2. An Average-Case Analysis

Recall that the algorithm under study stores all training cases in memory. Upon encountering a test instance, it retrieves the nearest observed case and predicts the same class as that for the stored instance. If a tie occurs, the version we will examine selects one of the nearest cases at random. We would like to compute A_n, the probability of correct classification (i.e., the predictive accuracy) after n training instances as a function of characteristics of the domain.

Our analysis will assume that there exist two classes, C and \bar{C}, defined over r relevant Boolean attributes A_j and i irrelevant ones. We will also assume that the probability of occurrence P(A_j) = 1/2 for each such attribute A_j, generating a uniform distribution over the instance space, and that the target concept is conjunctive, giving P(C) = P(A)^r = 1/2^r. Finally, we will measure the distance between two cases as the number of attributes on which they differ. Because there are r + i attributes, there are exactly r + i + 1 distinct distances, ranging from zero to r + i, that can occur. We will use I_d to denote an arbitrary test case that is distance d from the prototype P for the positive class. We will also refer to J_e as an arbitrary training case that is distance e from test instance I_d.
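These assumptions are easy to state in code. The sketch below (our own illustration; the helper names are hypothetical) encodes instances as Boolean tuples whose first r positions hold the relevant attributes, labels an instance positive exactly when all r relevant attributes are true, and checks by enumeration that P(C) = 1/2^r under the uniform distribution.

```python
from itertools import product

def concept(instance, r):
    # conjunctive target concept: positive iff all r relevant attributes are 1
    return all(instance[:r])

def distance(x, y):
    # number of attributes on which the two cases differ
    return sum(a != b for a, b in zip(x, y))

def positive_fraction(r, i):
    # enumerate the whole space of r + i Boolean attributes
    space = list(product([0, 1], repeat=r + i))
    return sum(concept(x, r) for x in space) / len(space)
```

For example, with r = 2 relevant and i = 1 irrelevant attributes, exactly 2 of the 8 instances are positive, so positive_fraction(2, 1) returns 1/4 = 1/2^2.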
We will often treat groups of test cases as a class based on their distance d from the prototype; similarly, we will consider groups of training instances based on their distance e from a given class of test cases. Our strategy considers positive and negative test instances separately. We will use A(C)_n to refer to the probability of correct classification given that the algorithm encounters a positive test case after n training cases, and we will use A(\bar{C})_n for the predictive accuracy given a negative test instance. The overall accuracy of the nearest neighbor algorithm after n training cases is

    A_n = P(C) A(C)_n + P(\bar{C}) A(\bar{C})_n ,    (1)

where P(C) is the probability of a positive test case and P(\bar{C}) = 1 - P(C) is the probability of a negative one.

Let us deal with the accuracy on negative test cases first. We can view this term as a weighted sum of the accuracies for different types of test cases I_d, where d is the distance from the prototype.¹ We must sum over all possible distances from the prototype at which a test case can occur (except for d = 0), multiplying the probability of each type by its accuracy B(\bar{C})_{d,n} according to the possible contents of memory after n training instances:

    A(\bar{C})_n = \sum_{d=1}^{r+i} \frac{\binom{r+i}{d} - \binom{i}{d}}{2^{r+i} - 2^i} B(\bar{C})_{d,n} ,    (2)

where the accuracy component

    B(\bar{C})_{d,n} = N(J_{d-i-1})_n + T(J_{d-i,d+i})^-_n + F(J_{d+i+1})_n    (3)

can be further divided into three terms, which are related to the regions shown in Figure 1. These correspond, respectively, to situations in which the closest stored training case is near enough to I_d to ensure correct classification, in which ties can occur because the closest case may belong to either class, and in which the closest instance is far enough from I_d to ensure correct prediction. The initial term, N(J_{d-i-1})_n, represents the contribution to the accuracy that results from the first of these situations, when the nearest stored training case J_e is closer to I_d than the latter is to the nearest possible positive instance (i.e., 0 <= e < d - i).
When this occurs, the algorithm will correctly classify I_d as negative. To see this, consider the innermost region in Figure 1, and note that the algorithm must observe only one training case within the region for this to transpire. The probability that any given training instance will fall within e or fewer steps of the test case is

    W(J_e) = \frac{1}{2^{r+i}} \sum_{j=0}^{e} \binom{r+i}{j} ,    (4)

giving the probability 1 - W(J_e) that this will not occur. Thus, the probability that (after n training instances) the algorithm will have seen at least one such case is

    N(J_e)_n = 1 - [1 - W(J_e)]^n ,    (5)

which gives the first term in the definition of B(\bar{C})_{d,n} in equation (3).

1. When some features are irrelevant, there are multiple positive instances, any of which we can select as the prototype without loss of generality.
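Equations (4) and (5) translate directly into code. The sketch below (our function names, not the paper's) computes W(J_e), the chance that one random training case falls within e steps of the test case, and N(J_e)_n, the chance that at least one of n independent training cases does so.

```python
from math import comb

def W(e, r, i):
    # equation (4): probability that a random training case falls
    # within e or fewer steps of the test case
    return sum(comb(r + i, j) for j in range(e + 1)) / 2 ** (r + i)

def N(e, r, i, n):
    # equation (5): probability that at least one of n training cases
    # falls within e steps of the test case
    return 1 - (1 - W(e, r, i)) ** n
```

As a sanity check, W at the maximum distance r + i is 1, and N increases monotonically with n.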
Figure 1. Regions of interest for computing the probability of correctly classifying a negative test case I_d, distance d from prototype P, using the stored training case J_e, which is distance e from I_d. The inner region depicts the probability that e < d - i, the outer region shows the probability that e > d + i, and the central region indicates the probability that d - i <= e <= d + i.

The second term from this equation, T(J_{d-i,d+i})^-_n, is the contribution to the accuracy when the distance to the nearest stored training instance J_e is between the distances to the nearest possible positive instance and the farthest one (i.e., d - i <= e <= d + i). This corresponds to the central region in the figure. In this situation, conflicts can occur during the classification process, in that the algorithm may retrieve both positive and negative training cases at distance e from the test case. Ties are possible in this region because, given i irrelevant attributes, there are 2^i positive instances that can be located i steps or less away from the prototype. For any given test case I_d that is distance d from the prototype, the nearest stored positive case may be i steps away from the prototype in the direction toward I_d, i steps away from the prototype in the direction away from I_d, or somewhere between these two extremes. Negative instances can also occur anywhere within this region, making the entire middle band in the figure open to the possibility of ties.

To handle all possible ties, we must sum over all distances e between d - i and d + i, then sum over the possible numbers k of nearest instances (positive or negative) that have been stored at each such distance. In each case, we must multiply the probability M(J_e)_{n,k} of that number occurring by the accuracy E(J_e)^-_k that results from such a tie. We can state this formally as

    T(J_{a,b})^-_n = \sum_{e=a}^{b} \sum_{k=1}^{n} M(J_e)_{n,k} E(J_e)^-_k .    (6)

We can further decompose the first term in the product into the probability that exactly k of the n training cases are distance e from the test case and that the remaining n - k are at some greater distance (since the k cases are the nearest ones), giving

    M(J_e)_{n,k} = \binom{n}{k} \left[ \frac{\binom{r+i}{e}}{2^{r+i}} \right]^k [1 - W(J_e)]^{n-k}    (7)

as the formal expression. To compute the accuracy given a tie among k stored cases, we must sum over the possible numbers j of negative instances, in each case multiplying the subaccuracy by the probability of that occurrence. This gives

    E(J_e)^-_k = \sum_{j=0}^{k} \frac{j}{k} \binom{k}{j} [1 - V(C)^-_{e,d}]^j [V(C)^-_{e,d}]^{k-j} ,    (8)

where j/k is the expected accuracy when one selects a training case at random from a set that contains j out of k negative instances. The term V(C)^-_{e,d} represents the probability of a positive instance J_e given that the instance is e steps away from negative test case I_d, which is in turn d steps away from the prototype. We can expand this term to

    V(C)^-_{e,d} = \sum_{k=1}^{\min(r,d)} \frac{\binom{r}{k} \binom{i}{d-k}}{\binom{r+i}{d} - \binom{i}{d}} \cdot \frac{\binom{i}{e-k}}{\binom{r+i}{e}} ,    (9)

with each summand provided k <= e, k <= r, and d - k <= i, and zero otherwise. This expression sums over different ways in which distance d can occur, with some steps along relevant attributes and others along irrelevant ones, multiplied by the probability that an instance e steps away from the resulting test case will be positive.

The final term from equation (3), F(J_{d+i+1})_n, specifies the contribution to the accuracy when the nearest training instance J_e has distance greater than d + i from test case I_d (i.e., d + i < e <= r + i), and thus falls within the outermost region in Figure 1, which contains only negative instances. As with the N(J_{d-i-1})_n term, the algorithm will correctly classify I_d as negative whenever this occurs. The chance that this situation will arise is

    F(J_e)_n = \left[ \frac{1}{2^{r+i}} \sum_{j=e}^{r+i} \binom{r+i}{j} \right]^n ,    (10)

which is the probability that any given training case will fall at distance e or greater from the test instance, taken to power n to generate the probability that every training instance seen so far satisfies this condition.
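Equation (10) maps to code just as directly. The following sketch (our naming, not the paper's) returns the probability that every one of n random training cases lies e or more steps from the test instance.

```python
from math import comb

def F(e, r, i, n):
    # equation (10): probability that every one of n training cases
    # falls at distance e or greater from the test instance
    p = sum(comb(r + i, j) for j in range(e, r + i + 1)) / 2 ** (r + i)
    return p ** n
```

As expected, F is 1 when e = 0 (every case is at distance 0 or more) and 0 when e exceeds the maximum distance r + i.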
Now we can turn to A(C)_n, the accuracy on positive test cases after n training instances. The situation here is simpler than for negative test cases, but still nontrivial. The algorithm is guaranteed to classify a positive test case I correctly only when the nearest stored training instance is itself the test case (i.e., e = 0). Ties can occur anywhere in the range 1 <= e <= i, giving the expression
    A(C)_n = N(J_0)_n + T(J_{1,i})^+_n .    (11)

Because one can treat any positive instance as the prototype, there is no need to sum over different distances here. Moreover, since no positive instance can be more than i steps away from any other, we can omit the third term of equation (3), F(J_{i+1})_n, which is always zero. The term for handling ties is analogous to equation (6) for the negative situation, but we must revise the definition for E(J_e)^-_k in equation (8) to

    E(J_e)^+_k = \sum_{j=0}^{k} \frac{k-j}{k} \binom{k}{j} [1 - V(C)^+_e]^j [V(C)^+_e]^{k-j} .    (12)

Note that here we must use the numerator k - j rather than j, in that we are dealing with positive test cases. Moreover, we must take a different approach to computing V(C)^+_e, the probability of a positive instance J_e given that the instance is e steps away from positive test case I. In this case, we have

    V(C)^+_e = \frac{\binom{i}{e}}{\binom{r+i}{e}} ,    (13)

when e <= i, but zero in other situations. Taken together, the definitions for A(C)_n, A(\bar{C})_n, and their component terms let us predict the overall accuracy A_n for the nearest neighbor algorithm as a function of the number of training instances n, the number of relevant attributes r, and the number of irrelevant attributes i.

3. Behavioral Implications of the Analysis

Although the equations in the previous sections provide a formal characterization of the nearest neighbor algorithm's behavior, their implications are not obvious. To better understand the effects of domain characteristics, we systematically varied certain domain parameters and examined the predicted results. In addition to computing theoretical predictions, we also collected experimental learning curves that summarize the algorithm's actual behavior. Each datum on these curves reports the classification accuracy averaged over 100 runs on randomly generated training sets, measured over the entire space of uniformly distributed noise-free instances.
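The experimental procedure just described is straightforward to reproduce in outline. The sketch below (our own reconstruction; the function names and parameter defaults are ours) trains the basic nearest neighbor algorithm on randomly drawn cases from a conjunctive domain and measures its accuracy over the entire noise-free instance space, averaged over independent runs, which is how each datum on an empirical learning curve is obtained.

```python
import random
from itertools import product

def hamming(x, y):
    # number of attributes on which two cases differ
    return sum(a != b for a, b in zip(x, y))

def concept(x, r):
    # conjunctive target: positive iff all r relevant attributes are 1
    return all(x[:r])

def empirical_accuracy(r, i, n, runs=100, rng=random.Random(0)):
    """Mean accuracy of basic 1-NN after n random training cases,
    measured over the whole instance space and averaged over `runs` runs."""
    space = list(product([0, 1], repeat=r + i))
    total = 0.0
    for _ in range(runs):
        cases = [(x, concept(x, r))
                 for x in (rng.choice(space) for _ in range(n))]
        correct = 0
        for x in space:
            best = min(hamming(t, x) for t, _ in cases)
            nearest = [c for t, c in cases if hamming(t, x) == best]
            correct += rng.choice(nearest) == concept(x, r)  # random tie-break
        total += correct / len(space)
    return total / runs
```

With small r and i the accuracy climbs toward 1 within a few dozen training cases, consistent with the shape of the predicted learning curves.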
In each case, we bound the mean accuracy with 95% confidence intervals to show the degree to which our predicted learning curves fit the observed ones. These experimental results provide an important check on our reasoning, and they identified a number of problems during development of the analysis.

Figure 2 shows the effects of the number of relevant attributes in the conjunctive target concept. For this study, we held the number of irrelevant attributes i constant at one, and we varied both the number of training instances and the number of relevant attributes r. As typical with learning curves, the accuracy starts low and gradually improves as the algorithm encounters more training instances. The effects of target complexity also make sense. Increasing the number of relevant features should increase the overall number of negative instances, giving higher accuracy early in the induction process; however, this factor also increases the total number of possible instances, requiring more training cases to reach asymptote and producing a crossover effect. The learning rate seems to degrade gracefully with increasing complexity, and the theoretical and actual learning curves are in close agreement, which lends confidence to the analysis.

Figure 2. Predictive accuracy of the nearest neighbor algorithm on a conjunctive concept, assuming the presence of one irrelevant attribute, as a function of training instances and the number of relevant attributes. The lines represent theoretical learning curves, whereas the error bars indicate experimental results.

The sensitivity of the nearest neighbor algorithm to irrelevant attributes is more dramatic, as shown in Figure 3. This graph summarizes the results of a similar study of the interaction between the number of training instances n and the number of irrelevant attributes.
Here we held the number of relevant attributes constant at two, and we examined three levels of the i parameter. As with the previous study, the degradation in learning rate is graceful, but the effect is somewhat greater. The difference between the two results appears more significant when one realizes that increasing i does not reduce the proportion of positive instances, as does increasing the number of relevant attributes. These observations are consistent with Aha's (1990) reports on the sensitivity of nearest neighbor methods to the number of irrelevant attributes.

We can also compare the behavior of the nearest neighbor algorithm to that of other induction methods for which average-case analyses exist. In particular, Pazzani and Sarrett (1992) have studied the Wholist algorithm, which initializes its concept description to the conjunction of features in the first positive training instance, then removes any feature that fails to occur in later positive instances. Similarly, Langley, Iba, and Thompson (1992) have analyzed the behavior of the Bayesian classifier, a simple probabilistic method that stores observed
base rates and conditional probabilities. As Pazzani and Sarrett note, Wholist's learning rate is unaffected by the number of relevant attributes, so their algorithm clearly scales up better on this dimension than does the nearest neighbor technique. Comparison to the Bayesian classifier on this factor is more difficult, in that Langley et al.'s study examined equal probabilities for the two classes, whereas the current analysis assumes that they differ.

Figure 3. Predictive accuracy of the nearest neighbor algorithm on a conjunctive concept, assuming the presence of two relevant attributes, as a function of training instances and the number of irrelevant attributes. The lines represent theoretical learning curves, whereas the error bars indicate experimental results.

In some domains, effective learning relies more on the ability to handle many irrelevant features than many relevant ones. In this vein, we have shown analytically that the number of training instances required for Wholist to achieve a given level of accuracy increases only with the logarithm of the number of irrelevant attributes. Although we have not yet derived similar analytic relations for the nearest neighbor or probabilistic methods, we can use the existing analyses to estimate ability to scale on this dimension. Figure 4 graphs the predicted number of training instances needed to achieve 90% accuracy for each algorithm as a function of the number of irrelevant attributes, assuming a target concept involving only one relevant feature and a uniform distribution of instances. The analyses do not provide these quantities directly, but one can interpolate them from the theoretical learning curves.
The figure reveals that the Bayesian classifier scales well to increasing numbers of irrelevant attributes, with the dependent measure growing as an approximately linear function of this factor. In contrast, the number of training instances required by the nearest neighbor method grows much faster, although we cannot yet determine the precise superlinear relation. These results are also consistent with Aha's (1990) conclusions about the response of standard instance-based methods to many irrelevant features.

Figure 4. Theoretical number of training instances (interpolated) required to reach 90% accuracy by a nearest neighbor algorithm and a simple Bayesian classifier on a conjunctive concept, assuming the presence of one relevant attribute, as a function of the number of irrelevant attributes.

However, the above comparisons are not entirely fair. Neither the Wholist algorithm nor the Bayesian classifier is designed to handle disjunctive concepts, which present no obstacles to even the simplest nearest neighbor algorithm. Our focus on conjunctive concepts in the current analysis has obscured this strength. Also, Aha (1990) has developed a variant of the nearest neighbor algorithm that retains statistics on the usefulness of each attribute, and he has shown that this approach fares better in domains with many irrelevant terms. Nevertheless, the ability to make comparisons of the above type is one advantage of careful formal analyses, and they have provided insights about the relative strengths of the different learning algorithms.

4. General Discussion

In this paper we presented an average-case analysis of the most basic nearest neighbor algorithm. Our treatment assumes that the target concept is conjunctive, that instances are free of noise, that attributes are Boolean, and that instances are uniformly distributed. Given information about the number of relevant and irrelevant attributes, our equations let us compute the expected classification accuracy after a given number of training instances.
To explore the implications of the analysis, we plotted the predicted behavior of the algorithm as a function of these three factors, finding graceful degradation as the number of relevants r and irrelevants i increased, but finding a stronger effect for the second. As a check on our analysis, we ran the algorithm on artificial domains with the same characteristics. The predicted behavior closely fit that found in the experiments, but only after correcting several errors in our reasoning that the empirical studies revealed.
These results begin to account for the wide range of performance observed for the algorithm by Aha and others on natural domains. However, a full explanation will require several extensions to the analysis. In particular, we must incorporate the influence of both class and attribute noise, as we have done in earlier analyses (Iba & Langley, 1992; Langley et al., 1992). We must also handle situations in which each attribute follows a separate probability distribution, following the approach taken by Hirschberg and Pazzani (1991). Even more important, we must extend the framework to handle broader classes of target concepts. Nearest neighbor methods are well suited for M of N concepts, in which any M of the N features in the prototype are sufficient for membership in the class. Since distance from the prototype plays a central role in the current analysis, we believe extending it to handle such concepts will be quite feasible. Similarly, because the algorithm stores many training instances in memory, it can easily acquire disjunctive concepts that require multiple prototypes. Again, we hope that simple extensions to the existing framework will handle this situation. We should also generalize the analysis to include k-nearest neighbor methods, following the lead recently provided by Turney (in press).

Another direction for future work would attempt to map the extended analysis onto natural domains in which there already exist experimental results with the method. Given information about the distributions of attributes (which are available in the data), along with estimates of the noise levels and target concepts (which require informed guesses), we can compare learning curves predicted by the theory with those observed in experimental runs. This approach would extend the applicability of our average-case model beyond the artificial domains to which we have limited our tests to date.
In summary, we believe that our initial analysis has provided some useful insights about the behavior of the basic nearest neighbor algorithm. These begin to explain why the algorithm compares favorably with more complex induction methods on some domains but not others, and our results are consistent with intuitions about the algorithm's sensitivity to irrelevant attributes. We also believe the existing theoretical framework can be extended to handle more challenging target concepts and other factors that complicate the learning task, thus providing a solid base on which to carry out further studies of instance-based learning.

Acknowledgements

We would like to thank Stephanie Sage and David Aha for discussions that helped clarify our ideas, David Aha and three reviewers for useful comments, and Nils Nilsson for his support and encouragement. The first author contributed to this work while he was at the Center for the Study of Language and Information, Stanford University, and the Institute for the Study of Learning and Expertise.

References

Aha, D. W. (1990). A study of instance-based algorithms for supervised learning tasks: Mathematical, empirical, and psychological evaluations. Doctoral dissertation, Department of Information & Computer Science, University of California, Irvine.

Aha, D. W., Kibler, D., & Albert, M. K. (1991). Instance-based learning algorithms. Machine Learning, 6, 37-66.

Cover, T. M., & Hart, P. E. (1967). Nearest neighbor pattern classification. IEEE Transactions on Information Theory, 13, 21-27.

Dasarathy, B. V. (Ed.). (1991). Nearest neighbor (NN) norms: NN pattern classification techniques. Los Alamitos, CA: IEEE Computer Society Press.

Fisher, D. H. (1987). Knowledge acquisition via incremental conceptual clustering. Machine Learning, 2, 139-172.

Hirschberg, D. S., & Pazzani, M. J. (1991). Average-case analysis of a k-CNF learning algorithm (Technical Report 91-50). Irvine: University of California, Department of Information & Computer Science.

Iba, W., & Langley, P. (1992).
Induction of one-level decision trees. Proceedings of the Ninth International Conference on Machine Learning (pp. 233-240). Aberdeen, Scotland: Morgan Kaufmann.

Langley, P., Iba, W., & Thompson, K. (1992). An analysis of Bayesian classifiers. Proceedings of the Tenth National Conference on Artificial Intelligence (pp. 223-228). San Jose, CA: AAAI Press.

Pazzani, M. J., & Sarrett, W. (1992). A framework for the average case analysis of conjunctive learning algorithms. Machine Learning, 9, 349-372.

Quinlan, J. R. (1986). Induction of decision trees. Machine Learning, 1, 81-106.

Rumelhart, D. E., & McClelland, J. L. (1986). On learning the past tenses of English verbs. In J. L. McClelland & D. E. Rumelhart (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 2). Cambridge, MA: MIT Press.

Stanfill, C., & Waltz, D. (1986). Toward memory-based reasoning. Communications of the ACM, 29, 1213-1228.

Turney, P. (in press). Voting in instance-based learning: A theoretical analysis. Journal of Experimental and Theoretical Artificial Intelligence.
More informationSection 3.3. Differentiation of Polynomials and Rational Functions. Difference Equations to Differential Equations
Difference Equations to Differential Equations Section 3.3 Differentiation of Polynomials an Rational Functions In tis section we begin te task of iscovering rules for ifferentiating various classes of
More informationMathematics Review for Economists
Mathematics Review for Economists by John E. Floy University of Toronto May 9, 2013 This ocument presents a review of very basic mathematics for use by stuents who plan to stuy economics in grauate school
More informationHow To Segmentate An Insurance Customer In An Insurance Business
International Journal of Database Theory an Application, pp.25-36 http://x.oi.org/10.14257/ijta.2014.7.1.03 A Case Stuy of Applying SOM in Market Segmentation of Automobile Insurance Customers Vahi Golmah
More informationA Comparison of Performance Measures for Online Algorithms
A Comparison of Performance Measures for Online Algorithms Joan Boyar 1, Sany Irani 2, an Kim S. Larsen 1 1 Department of Mathematics an Computer Science, University of Southern Denmark, Campusvej 55,
More informationStock Market Value Prediction Using Neural Networks
Stock Market Value Preiction Using Neural Networks Mahi Pakaman Naeini IT & Computer Engineering Department Islamic Aza University Paran Branch e-mail: m.pakaman@ece.ut.ac.ir Hamireza Taremian Engineering
More informationThe one-year non-life insurance risk
The one-year non-life insurance risk Ohlsson, Esbjörn & Lauzeningks, Jan Abstract With few exceptions, the literature on non-life insurance reserve risk has been evote to the ultimo risk, the risk in the
More informationDouble Integrals in Polar Coordinates
Double Integrals in Polar Coorinates Part : The Area Di erential in Polar Coorinates We can also aly the change of variable formula to the olar coorinate transformation x = r cos () ; y = r sin () However,
More informationMathematics. Circles. hsn.uk.net. Higher. Contents. Circles 119 HSN22400
hsn.uk.net Higher Mathematics UNIT OUTCOME 4 Circles Contents Circles 119 1 Representing a Circle 119 Testing a Point 10 3 The General Equation of a Circle 10 4 Intersection of a Line an a Circle 1 5 Tangents
More informationFactoring Dickson polynomials over finite fields
Factoring Dickson polynomials over finite fiels Manjul Bhargava Department of Mathematics, Princeton University. Princeton NJ 08544 manjul@math.princeton.eu Michael Zieve Department of Mathematics, University
More informationCalculating Viscous Flow: Velocity Profiles in Rivers and Pipes
previous inex next Calculating Viscous Flow: Velocity Profiles in Rivers an Pipes Michael Fowler, UVa 9/8/1 Introuction In this lecture, we ll erive the velocity istribution for two examples of laminar
More informationIntroduction to Integration Part 1: Anti-Differentiation
Mathematics Learning Centre Introuction to Integration Part : Anti-Differentiation Mary Barnes c 999 University of Syney Contents For Reference. Table of erivatives......2 New notation.... 2 Introuction
More informationModelling and Resolving Software Dependencies
June 15, 2005 Abstract Many Linux istributions an other moern operating systems feature the explicit eclaration of (often complex) epenency relationships between the pieces of software
More information15.2. First-Order Linear Differential Equations. First-Order Linear Differential Equations Bernoulli Equations Applications
00 CHAPTER 5 Differential Equations SECTION 5. First-Orer Linear Differential Equations First-Orer Linear Differential Equations Bernoulli Equations Applications First-Orer Linear Differential Equations
More informationWeb Appendices to Selling to Overcon dent Consumers
Web Appenices to Selling to Overcon ent Consumers Michael D. Grubb MIT Sloan School of Management Cambrige, MA 02142 mgrubbmit.eu www.mit.eu/~mgrubb May 2, 2008 B Option Pricing Intuition This appenix
More informationy or f (x) to determine their nature.
Level C5 of challenge: D C5 Fining stationar points of cubic functions functions Mathematical goals Starting points Materials require Time neee To enable learners to: fin the stationar points of a cubic
More information11 CHAPTER 11: FOOTINGS
CHAPTER ELEVEN FOOTINGS 1 11 CHAPTER 11: FOOTINGS 11.1 Introuction Footings are structural elements that transmit column or wall loas to the unerlying soil below the structure. Footings are esigne to transmit
More informationA New Evaluation Measure for Information Retrieval Systems
A New Evaluation Measure for Information Retrieval Systems Martin Mehlitz martin.mehlitz@ai-labor.e Christian Bauckhage Deutsche Telekom Laboratories christian.bauckhage@telekom.e Jérôme Kunegis jerome.kunegis@ai-labor.e
More informationDi usion on Social Networks. Current Version: June 6, 2006 Appeared in: Économie Publique, Numéro 16, pp 3-16, 2005/1.
Di usion on Social Networks Matthew O. Jackson y Caltech Leeat Yariv z Caltech Current Version: June 6, 2006 Appeare in: Économie Publique, Numéro 16, pp 3-16, 2005/1. Abstract. We analyze a moel of i
More informationMeasures of distance between samples: Euclidean
4- Chapter 4 Measures of istance between samples: Eucliean We will be talking a lot about istances in this book. The concept of istance between two samples or between two variables is funamental in multivariate
More informationA Blame-Based Approach to Generating Proposals for Handling Inconsistency in Software Requirements
International Journal of nowlege an Systems Science, 3(), -7, January-March 0 A lame-ase Approach to Generating Proposals for Hanling Inconsistency in Software Requirements eian Mu, Peking University,
More informationThroughputScheduler: Learning to Schedule on Heterogeneous Hadoop Clusters
ThroughputScheuler: Learning to Scheule on Heterogeneous Haoop Clusters Shehar Gupta, Christian Fritz, Bob Price, Roger Hoover, an Johan e Kleer Palo Alto Research Center, Palo Alto, CA, USA {sgupta, cfritz,
More informationLecture L25-3D Rigid Body Kinematics
J. Peraire, S. Winall 16.07 Dynamics Fall 2008 Version 2.0 Lecture L25-3D Rigi Boy Kinematics In this lecture, we consier the motion of a 3D rigi boy. We shall see that in the general three-imensional
More informationWeb Appendices of Selling to Overcon dent Consumers
Web Appenices of Selling to Overcon ent Consumers Michael D. Grubb A Option Pricing Intuition This appenix provies aitional intuition base on option pricing for the result in Proposition 2. Consier the
More informationCost Efficient Datacenter Selection for Cloud Services
Cost Efficient Datacenter Selection for Clou Services Hong u, Baochun Li henryxu, bli@eecg.toronto.eu Department of Electrical an Computer Engineering University of Toronto Abstract Many clou services
More informationExploratory Optimal Latin Hypercube Designs for Computer Simulated Experiments
Thailan Statistician July 0; 9() : 7-93 http://statassoc.or.th Contribute paper Exploratory Optimal Latin Hypercube Designs for Computer Simulate Experiments Rachaaporn Timun [a,b] Anamai Na-uom* [a,b]
More informationImproving Direct Marketing Profitability with Neural Networks
Volume 9 o.5, September 011 Improving Direct Marketing Profitability with eural etworks Zaiyong Tang Salem State University Salem, MA 01970 ABSTRACT Data mining in irect marketing aims at ientifying the
More informationTO DETERMINE THE SHELF LIFE OF IBUPROFEN SOLUTION
TO DETERMINE THE SHELF LIFE OF IBUPROFEN SOLUTION AIM: To etermine the shelf life of solution using accelerate stability stuies. MATERIAL REQUIREMENT, ethanol, phenolphthalein, soium hyroxie (0.1N), glass
More informationState of Louisiana Office of Information Technology. Change Management Plan
State of Louisiana Office of Information Technology Change Management Plan Table of Contents Change Management Overview Change Management Plan Key Consierations Organizational Transition Stages Change
More informationISSN: 2277-3754 ISO 9001:2008 Certified International Journal of Engineering and Innovative Technology (IJEIT) Volume 3, Issue 12, June 2014
ISSN: 77-754 ISO 900:008 Certifie International Journal of Engineering an Innovative echnology (IJEI) Volume, Issue, June 04 Manufacturing process with isruption uner Quaratic Deman for Deteriorating Inventory
More informationAnswers to the Practice Problems for Test 2
Answers to the Practice Problems for Test 2 Davi Murphy. Fin f (x) if it is known that x [f(2x)] = x2. By the chain rule, x [f(2x)] = f (2x) 2, so 2f (2x) = x 2. Hence f (2x) = x 2 /2, but the lefthan
More informationOptimal Control Policy of a Production and Inventory System for multi-product in Segmented Market
RATIO MATHEMATICA 25 (2013), 29 46 ISSN:1592-7415 Optimal Control Policy of a Prouction an Inventory System for multi-prouct in Segmente Market Kuleep Chauhary, Yogener Singh, P. C. Jha Department of Operational
More informationTowards a Framework for Enterprise Architecture Frameworks Comparison and Selection
Towars a Framework for Enterprise Frameworks Comparison an Selection Saber Aballah Faculty of Computers an Information, Cairo University Saber_aballah@hotmail.com Abstract A number of Enterprise Frameworks
More informationUnsteady Flow Visualization by Animating Evenly-Spaced Streamlines
EUROGRAPHICS 2000 / M. Gross an F.R.A. Hopgoo Volume 19, (2000), Number 3 (Guest Eitors) Unsteay Flow Visualization by Animating Evenly-Space Bruno Jobar an Wilfri Lefer Université u Littoral Côte Opale,
More informationDetecting Possibly Fraudulent or Error-Prone Survey Data Using Benford s Law
Detecting Possibly Frauulent or Error-Prone Survey Data Using Benfor s Law Davi Swanson, Moon Jung Cho, John Eltinge U.S. Bureau of Labor Statistics 2 Massachusetts Ave., NE, Room 3650, Washington, DC
More informationOption Pricing for Inventory Management and Control
Option Pricing for Inventory Management an Control Bryant Angelos, McKay Heasley, an Jeffrey Humpherys Abstract We explore the use of option contracts as a means of managing an controlling inventories
More informationCross-Over Analysis Using T-Tests
Chapter 35 Cross-Over Analysis Using -ests Introuction his proceure analyzes ata from a two-treatment, two-perio (x) cross-over esign. he response is assume to be a continuous ranom variable that follows
More informationMandate-Based Health Reform and the Labor Market: Evidence from the Massachusetts Reform
Manate-Base Health Reform an the Labor Market: Evience from the Massachusetts Reform Jonathan T. Kolsta Wharton School, University of Pennsylvania an NBER Amana E. Kowalski Department of Economics, Yale
More informationAPPLICATION OF CALCULUS IN COMMERCE AND ECONOMICS
Application of Calculus in Commerce an Economics 41 APPLICATION OF CALCULUS IN COMMERCE AND ECONOMICS æ We have learnt in calculus that when 'y' is a function of '', the erivative of y w.r.to i.e. y ö
More informationHere the units used are radians and sin x = sin(x radians). Recall that sin x and cos x are defined and continuous everywhere and
Lecture 9 : Derivatives of Trigonometric Functions (Please review Trigonometry uner Algebra/Precalculus Review on the class webpage.) In this section we will look at the erivatives of the trigonometric
More informationAn intertemporal model of the real exchange rate, stock market, and international debt dynamics: policy simulations
This page may be remove to conceal the ientities of the authors An intertemporal moel of the real exchange rate, stock market, an international ebt ynamics: policy simulations Saziye Gazioglu an W. Davi
More informationRUNESTONE, an International Student Collaboration Project
RUNESTONE, an International Stuent Collaboration Project Mats Daniels 1, Marian Petre 2, Vicki Almstrum 3, Lars Asplun 1, Christina Björkman 1, Carl Erickson 4, Bruce Klein 4, an Mary Last 4 1 Department
More informationA Data Placement Strategy in Scientific Cloud Workflows
A Data Placement Strategy in Scientific Clou Workflows Dong Yuan, Yun Yang, Xiao Liu, Jinjun Chen Faculty of Information an Communication Technologies, Swinburne University of Technology Hawthorn, Melbourne,
More informationGame Theoretic Modeling of Cooperation among Service Providers in Mobile Cloud Computing Environments
2012 IEEE Wireless Communications an Networking Conference: Services, Applications, an Business Game Theoretic Moeling of Cooperation among Service Proviers in Mobile Clou Computing Environments Dusit
More informationFactor Prices and International Trade: A Unifying Perspective
Factor Prices an International Trae: A Unifying Perspective Ariel Burstein UCLA an NBER Jonathan Vogel Columbia an NBER October 20 Abstract How o trae liberalizations a ect relative factor prices an to
More informationPythagorean Triples Over Gaussian Integers
International Journal of Algebra, Vol. 6, 01, no., 55-64 Pythagorean Triples Over Gaussian Integers Cheranoot Somboonkulavui 1 Department of Mathematics, Faculty of Science Chulalongkorn University Bangkok
More informationSafety Management System. Initial Revision Date: Version Revision No. 02 MANUAL LIFTING
Revision Preparation: Safety Mgr Authority: Presient Issuing Dept: Safety Page: Page 1 of 11 Purpose is committe to proviing a safe an healthy working environment for all employees. Musculoskeletal isorers
More informationA New Pricing Model for Competitive Telecommunications Services Using Congestion Discounts
A New Pricing Moel for Competitive Telecommunications Services Using Congestion Discounts N. Keon an G. Ananalingam Department of Systems Engineering University of Pennsylvania Philaelphia, PA 19104-6315
More informationExponential Functions: Differentiation and Integration. The Natural Exponential Function
46_54.q //4 :59 PM Page 5 5 CHAPTER 5 Logarithmic, Eponential, an Other Transcenental Functions Section 5.4 f () = e f() = ln The inverse function of the natural logarithmic function is the natural eponential
More informationAn Introduction to Event-triggered and Self-triggered Control
An Introuction to Event-triggere an Self-triggere Control W.P.M.H. Heemels K.H. Johansson P. Tabuaa Abstract Recent evelopments in computer an communication technologies have le to a new type of large-scale
More informationCalibration of the broad band UV Radiometer
Calibration of the broa ban UV Raiometer Marian Morys an Daniel Berger Solar Light Co., Philaelphia, PA 19126 ABSTRACT Mounting concern about the ozone layer epletion an the potential ultraviolet exposure
More informationOptimizing Multiple Stock Trading Rules using Genetic Algorithms
Optimizing Multiple Stock Traing Rules using Genetic Algorithms Ariano Simões, Rui Neves, Nuno Horta Instituto as Telecomunicações, Instituto Superior Técnico Av. Rovisco Pais, 040-00 Lisboa, Portugal.
More informationHull, Chapter 11 + Sections 17.1 and 17.2 Additional reference: John Cox and Mark Rubinstein, Options Markets, Chapter 5
Binomial Moel Hull, Chapter 11 + ections 17.1 an 17.2 Aitional reference: John Cox an Mark Rubinstein, Options Markets, Chapter 5 1. One-Perio Binomial Moel Creating synthetic options (replicating options)
More informationPredicting Television Ratings and Its Application to Taiwan Cable TV Channels
2n International Symposium on Computer, Communication, Control an Automation (3CA 2013) Preicting Television Ratings an Its Application to Taiwan Cable TV Channels Hui-Ling Huang Department of Biological
More informationModeling and Predicting Popularity Dynamics via Reinforced Poisson Processes
Proceeings of the Twenty-Eighth AAAI Conference on Artificial Intelligence Moeling an Preicting Popularity Dynamics via Reinforce Poisson Processes Huawei Shen 1, Dashun Wang 2, Chaoming Song 3, Albert-László
More informationMinimum-Energy Broadcast in All-Wireless Networks: NP-Completeness and Distribution Issues
Minimum-Energy Broacast in All-Wireless Networks: NP-Completeness an Distribution Issues Mario Čagal LCA-EPFL CH-05 Lausanne Switzerlan mario.cagal@epfl.ch Jean-Pierre Hubaux LCA-EPFL CH-05 Lausanne Switzerlan
More informationThe Optimality of Naive Bayes
The Optimality of Naive Bayes Harry Zhang Faculty of Computer Science University of New Brunswick Fredericton, New Brunswick, Canada email: hzhang@unbca E3B 5A3 Abstract Naive Bayes is one of the most
More informationProduct Differentiation for Software-as-a-Service Providers
University of Augsburg Prof. Dr. Hans Ulrich Buhl Research Center Finance & Information Management Department of Information Systems Engineering & Financial Management Discussion Paper WI-99 Prouct Differentiation
More informationAchieving quality audio testing for mobile phones
Test & Measurement Achieving quality auio testing for mobile phones The auio capabilities of a cellular hanset provie the funamental interface between the user an the raio transceiver. Just as RF testing
More informationMathematical Models of Therapeutical Actions Related to Tumour and Immune System Competition
Mathematical Moels of Therapeutical Actions Relate to Tumour an Immune System Competition Elena De Angelis (1 an Pierre-Emmanuel Jabin (2 (1 Dipartimento i Matematica, Politecnico i Torino Corso Duca egli
More informationHow To Connect Two Servers Together In A Data Center Network
DPillar: Scalable Dual-Port Server Interconnection for Data Center Networks Yong Liao ECE Department University of Massachusetts Amherst, MA 3, USA Dong Yin Automation Department Northwestern Polytech
More informationThe most common model to support workforce management of telephone call centers is
Designing a Call Center with Impatient Customers O. Garnett A. Manelbaum M. Reiman Davison Faculty of Inustrial Engineering an Management, Technion, Haifa 32000, Israel Davison Faculty of Inustrial Engineering
More informationConsumer Referrals. Maria Arbatskaya and Hideo Konishi. October 28, 2014
Consumer Referrals Maria Arbatskaya an Hieo Konishi October 28, 2014 Abstract In many inustries, rms rewar their customers for making referrals. We analyze the optimal policy mix of price, avertising intensity,
More informationFAST JOINING AND REPAIRING OF SANDWICH MATERIALS WITH DETACHABLE MECHANICAL CONNECTION TECHNOLOGY
FAST JOINING AND REPAIRING OF SANDWICH MATERIALS WITH DETACHABLE MECHANICAL CONNECTION TECHNOLOGY Jörg Felhusen an Sivakumara K. Krishnamoorthy RWTH Aachen University, Chair an Insitute for Engineering
More informationParameterized Algorithms for d-hitting Set: the Weighted Case Henning Fernau. Univ. Trier, FB 4 Abteilung Informatik 54286 Trier, Germany
Parameterize Algorithms for -Hitting Set: the Weighte Case Henning Fernau Trierer Forschungsberichte; Trier: Technical Reports Informatik / Mathematik No. 08-6, July 2008 Univ. Trier, FB 4 Abteilung Informatik
More informationPatch Complexity, Finite Pixel Correlations and Optimal Denoising
Patch Complexity, Finite Pixel Correlations an Optimal Denoising Anat Levin Boaz Naler Freo Duran William T. Freeman Weizmann Institute MIT CSAIL Abstract. Image restoration tasks are ill-pose problems,
More informationSearch Advertising Based Promotion Strategies for Online Retailers
Search Avertising Base Promotion Strategies for Online Retailers Amit Mehra The Inian School of Business yeraba, Inia Amit Mehra@isb.eu ABSTRACT Web site aresses of small on line retailers are often unknown
More informationDynamic Network Security Deployment Under Partial Information
Dynamic Network Security Deployment Uner Partial nformation nvite Paper) George Theoorakopoulos EPFL Lausanne, Switzerlan Email: george.theoorakopoulos @ epfl.ch John S. Baras University of Marylan College
More informationNotes on tangents to parabolas
Notes on tangents to parabolas (These are notes for a talk I gave on 2007 March 30.) The point of this talk is not to publicize new results. The most recent material in it is the concept of Bézier curves,
More informationMATRIX ALGEBRA AND SYSTEMS OF EQUATIONS
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a
More informationThe Standard & Modified Proctor Density Tests
Revise 2003, WKS Datasheet No. 7 OHAWK COLLEGE OF APPLIED ARTS AND TECHNOLOGY BUILDING AND CONSTRUCTION SCIENCES DEPARTENT The Stanar & oifie Proctor Density Tests INTRODUCTION: In placing fill, it is
More informationReading: Ryden chs. 3 & 4, Shu chs. 15 & 16. For the enthusiasts, Shu chs. 13 & 14.
7 Shocks Reaing: Ryen chs 3 & 4, Shu chs 5 & 6 For the enthusiasts, Shu chs 3 & 4 A goo article for further reaing: Shull & Draine, The physics of interstellar shock waves, in Interstellar processes; Proceeings
More informationUsing research evidence in mental health: user-rating and focus group study of clinicians preferences for a new clinical question-answering service
DOI: 10.1111/j.1471-1842.2008.00833.x Using research evience in mental health: user-rating an focus group stuy of clinicians preferences for a new clinical question-answering service Elizabeth A. Barley*,
More informationOptimal Energy Commitments with Storage and Intermittent Supply
Submitte to Operations Research manuscript OPRE-2009-09-406 Optimal Energy Commitments with Storage an Intermittent Supply Jae Ho Kim Department of Electrical Engineering, Princeton University, Princeton,
More informationGiven three vectors A, B, andc. We list three products with formula (A B) C = B(A C) A(B C); A (B C) =B(A C) C(A B);
1.1.4. Prouct of three vectors. Given three vectors A, B, anc. We list three proucts with formula (A B) C = B(A C) A(B C); A (B C) =B(A C) C(A B); a 1 a 2 a 3 (A B) C = b 1 b 2 b 3 c 1 c 2 c 3 where the
More informationMinimizing Makespan in Flow Shop Scheduling Using a Network Approach
Minimizing Makespan in Flow Shop Scheuling Using a Network Approach Amin Sahraeian Department of Inustrial Engineering, Payame Noor University, Asaluyeh, Iran 1 Introuction Prouction systems can be ivie
More informationIntroduction to Logistic Regression
OpenStax-CNX module: m42090 1 Introduction to Logistic Regression Dan Calderon This work is produced by OpenStax-CNX and licensed under the Creative Commons Attribution License 3.0 Abstract Gives introduction
More informationSoftware Diversity for Information Security
for Information Security Pei-yu Chen, Gaurav Kataria an Ramayya Krishnan,3 Heinz School, Tepper School an 3 Cylab Carnegie Mellon University Abstract: In this paper we analyze a software iversification-base
More informationRisk Adjustment for Poker Players
Risk Ajustment for Poker Players William Chin DePaul University, Chicago, Illinois Marc Ingenoso Conger Asset Management LLC, Chicago, Illinois September, 2006 Introuction In this article we consier risk
More informationAN ANALYSIS OF A WAR-LIKE CARD GAME. Introduction
AN ANALYSIS OF A WAR-LIKE CARD GAME BORIS ALEXEEV AND JACOB TSIMERMAN Abstract. In his book Mathematical Mind-Benders, Peter Winkler poses the following open problem, originally due to the first author:
More information