Average-Case Analysis of a Nearest Neighbor Algorithm


From Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence (1993). Chambery, France: Morgan Kaufmann.

Average-Case Analysis of a Nearest Neighbor Algorithm

Pat Langley
Learning Systems Department, Siemens Corporate Research, 755 College Road East, Princeton, NJ USA

Wayne Iba
AI Research Branch, Mail Stop, NASA Ames Research Center, Moffett Field, CA USA

Abstract

In this paper we present an average-case analysis of the nearest neighbor algorithm, a simple induction method that has been studied by many researchers. Our analysis assumes a conjunctive target concept, noise-free Boolean attributes, and a uniform distribution over the instance space. We calculate the probability that the algorithm will encounter a test instance that is distance d from the prototype of the concept, along with the probability that the nearest stored training case is distance e from this test instance. From this we compute the probability of correct classification as a function of the number of observed training cases, the number of relevant attributes, and the number of irrelevant attributes. We also explore the behavioral implications of the analysis by presenting predicted learning curves for artificial domains, and give experimental results on these domains as a check on our reasoning.

1. Nearest Neighbor Algorithms

Most learning methods form some abstraction from experience and store this structure in memory. The field has explored a wide range of such structures, including decision trees (Quinlan, 1986), multilayer networks (Rumelhart & McClelland, 1986), and probabilistic summaries (Fisher, 1987). However, in recent years there has been growing interest in methods that store instances or cases in memory, and that apply this specific knowledge directly to new situations. This approach goes by many names, including instance-based learning and case-based reasoning, and one can apply it to many different tasks.
The simplest and most widely studied class of techniques, often called nearest neighbor algorithms, originated in the field of pattern recognition (Cover & Hart, 1967; Dasarathy, 1991) and applies to classification tasks. In the basic method, learning appears almost trivial: one simply stores each training instance in memory. The power of the method comes from the retrieval process. Given a new test instance, one finds the stored training case that is nearest according to some distance measure, notes the class of the retrieved case, and predicts the new instance will have the same class. Many variants exist on this basic algorithm. For instance, Stanfill and Waltz (1986) have studied a version that retrieves the k closest instances and bases predictions on a weighted vote, incorporating the distance of each stored instance from the test case; such techniques are often referred to as k-nearest neighbor algorithms. Others (Cover & Hart, 1967; Aha, Kibler, & Albert, 1991) have studied an alternative approach that stores cases in memory only upon making an error, thus reducing memory load and retrieval time with little reduction in accuracy. We would like to understand the learning behavior of this intriguing class of methods under various conditions. Aha et al. present a PAC analysis of one such algorithm, but our aim is to obtain tighter bounds that we can directly relate to experimental results. To this end, we decided to pursue an average-case analysis along the lines developed by Hirschberg and Pazzani (1991) for logical induction methods and by Langley, Iba, and Thompson (1992) for probabilistic ones. For the sake of tractability, we focused our efforts on the most basic of the instance-based techniques, which stores all training cases and bases its prediction on the single nearest neighbor. However, the simplicity of this method does not mean it lacks power. Aha et al.
(1991) report the results of an experimental study that compared the algorithm (which they called IB1) to Quinlan's (1986) more sophisticated C4 algorithm for inducing decision trees. Table 1 contains the results on four natural domains, two of them ("Cleveland" and "Hungarian") involving prediction of heart disease from symptoms, another concerning the diagnosis of primary tumors, and a fourth involving prediction of party affiliations for members of Congress from their voting records. For each domain, Aha et al. trained the algorithms on approximately 80% of the cases and tested them on the remaining instances, averaging over 50 different partitions. On the Cleveland data, the two algorithms' performance was indistinguishable, and IB1's behavior on the tumor and voting records nearly reached C4's level. Although the basic nearest neighbor algorithm fared much worse on the Hungarian data set, simple modifications produced accuracy comparable to that for C4 (Aha, 1990), and its performance on the other domains argues that it deserves closer inspection in any case. In the remainder of this paper, we report the initial results of our average-case analysis of the simple nearest neighbor method. We begin by presenting the assumptions of the analysis, followed by our derivation of the equations for predicting accuracy as a function of three variables. After this, we examine the implications of the analysis for the algorithm's behavior, comparing predicted learning curves for artificial domains of differing difficulty. Finally, we note some limitations of the analysis and suggest directions for future research.

Table 1. Percentage accuracies for a nearest neighbor method (IB1) and a decision-tree algorithm (C4) on four classification domains, taken from Aha et al. (1991). Each column reports both average accuracy of classification and standard error.

    Domain           IB1    C4
    Cleveland
    Hungarian
    Primary tumor
    Voting records

2. An Average-Case Analysis

Recall that the algorithm under study stores all training cases in memory. Upon encountering a test instance, it retrieves the nearest observed case and predicts the same class as that for the stored instance. If a tie occurs, the version we will examine selects one of the nearest cases at random. We would like to compute A_n, the probability of correct classification (i.e., the predictive accuracy) after n training instances as a function of characteristics of the domain. Our analysis will assume that there exist two classes, C and \bar{C}, defined over r relevant Boolean attributes A_j and i irrelevant ones. We will also assume that the probability of occurrence P(A_j) = 1/2 for each such attribute A_j, generating a uniform distribution over the instance space, and that the target concept is conjunctive, giving P(C) = P(A)^r = 1/2^r. Finally, we will measure the distance between two cases as the number of attributes on which they differ. Because there are r + i attributes, there are exactly r + i + 1 distinct distances, ranging from zero to r + i, that can occur. We will use I_d to denote an arbitrary test case that is distance d from the prototype P for the positive class. We will also refer to J_e as an arbitrary training case that is distance e from test instance I.
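To make this setup concrete, the following Python sketch (a minimal illustration; all function names are ours, not the paper's) implements the uniform Boolean instance space, the conjunctive target concept, the distance measure, and the single-nearest-neighbor rule with random tie-breaking described above:

```python
import random

def hamming(x, y):
    """Distance between two cases: the number of attributes on which they differ."""
    return sum(a != b for a, b in zip(x, y))

def conjunctive_concept(x, r):
    """Positive iff all r relevant attributes take their prototype (true) values."""
    return all(x[:r])

def random_instance(r, i, rng=random):
    """Draw one instance from the uniform distribution over 2^(r+i) Boolean cases."""
    return tuple(rng.random() < 0.5 for _ in range(r + i))

def nn_predict(memory, query, rng=random):
    """1-NN: return the class of the nearest stored case, breaking ties at random.

    `memory` is a list of (case, label) pairs, as stored during training."""
    best = min(hamming(case, query) for case, _ in memory)
    nearest = [label for case, label in memory if hamming(case, query) == best]
    return rng.choice(nearest)
```

For example, with memory [((True, True, True), "pos"), ((False, False, False), "neg")], the query (True, True, False) lies one step from the first case and two from the second, so the method predicts "pos" with no tie to break.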
We will often treat groups of test cases as a class based on their distance d from the prototype; similarly, we will consider groups of training instances based on their distance e from a given class of test cases. Our strategy considers positive and negative test instances separately. We will use A(C)_n to refer to the probability of correct classification given that the algorithm encounters a positive test case after n training cases, and we will use A(\bar{C})_n for the predictive accuracy given a negative test instance. The overall accuracy of the nearest neighbor algorithm after n training cases is

    A_n = P(C) A(C)_n + P(\bar{C}) A(\bar{C})_n ,    (1)

where P(C) is the probability of a positive test case and P(\bar{C}) = 1 - P(C) is the probability of a negative one. Let us deal with the accuracy on negative test cases first. We can view this term as a weighted sum of the accuracies for different types of test cases I_d, where d is the distance from the prototype.[1] We must sum over all possible distances from the prototype at which a test case can occur (except for d = 0), multiplying the probability of each type by its accuracy B(\bar{C})_{d,n} according to the possible contents of memory after n training instances:

    A(\bar{C})_n = \sum_{d=1}^{r+i} \frac{\binom{r+i}{d} - \binom{i}{d}}{2^{r+i} - 2^{i}} B(\bar{C})_{d,n} ,    (2)

where the accuracy component

    B(\bar{C})_{d,n} = N(J_{d-i-1})_n + T(J_{d-i,d+i})^{-}_n + F(J_{d+i+1})_n    (3)

can be further divided into three terms, which are related to the regions shown in Figure 1. These correspond, respectively, to situations in which the closest stored training case is near enough to I_d to ensure correct classification, in which ties can occur because the closest case may belong to either class, and in which the closest instance is far enough from I_d to ensure correct prediction. The initial term, N(J_{d-i-1})_n, represents the contribution to the accuracy that results from the first of these situations, when the nearest stored training case J_e is closer to I_d than the latter is to the nearest positive instance (i.e., 0 <= e < d - i).
When this occurs, the algorithm will correctly classify I_d as negative. To see this, consider the innermost region in Figure 1, and note that the algorithm must observe only one training case within the region for this to transpire. The probability that any given training instance will fall within e or fewer steps of the test case is

    W(J_e) = \frac{1}{2^{r+i}} \sum_{j=0}^{e} \binom{r+i}{j} ,    (4)

giving the probability 1 - W(J_e) that this will not occur. Thus, the probability that (after n training instances) the algorithm will have seen at least one such case is

    N(J_e)_n = 1 - [1 - W(J_e)]^n ,    (5)

which gives the first term in the definition of B(\bar{C})_{d,n} in equation (3).

[1] When some features are irrelevant, there are multiple positive instances, any of which we can select as the prototype without loss of generality.
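Equations (4) and (5) translate directly into code. This sketch (function names are ours) computes W(J_e), the probability that a random training instance lies within e steps of a given case, and N(J_e)_n, the probability that at least one of n training instances does so:

```python
from math import comb

def W(e, r, i):
    """Eq. (4): probability that a random instance lies within e or fewer steps
    of a given case, under the uniform distribution over 2^(r+i) instances."""
    if e < 0:
        return 0.0  # no instance can be closer than distance 0
    return sum(comb(r + i, j) for j in range(min(e, r + i) + 1)) / 2 ** (r + i)

def N(e, n, r, i):
    """Eq. (5): probability that at least one of n training instances
    falls within e steps of the test case."""
    return 1.0 - (1.0 - W(e, r, i)) ** n
```

With r = 2 and i = 1, for instance, W at the maximum distance r + i is 1, since every instance lies within r + i steps of every other, and N grows monotonically in n, as a learning curve should.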
Figure 1. Regions of interest for computing the probability of correctly classifying a negative test case I_d, distance d from prototype P, using the stored training case J_e, which is distance e from I_d. The inner region depicts the probability that e < d - i, the outer region shows the probability that e > d + i, and the central region indicates the probability that d - i <= e <= d + i.

The second term from this equation, T(J_{d-i,d+i})^{-}_n, is the contribution to the accuracy when the distance to the nearest stored training instance J_e is between the distances to the nearest possible positive instance and the farthest one (i.e., d - i <= e <= d + i). This corresponds to the central region in the figure. In this situation, conflicts can occur during the classification process, in that the algorithm may retrieve both positive and negative training cases at distance e from the test case. Ties are possible in this region because, given i irrelevant attributes, there are 2^i positive instances that can be located i steps or less away from the prototype. For any given test case I_d that is distance d from the prototype, the nearest stored positive case may be i steps away from the prototype in the direction toward I_d, i steps away from the prototype in the direction away from I_d, or somewhere between these two extremes. Negative instances can also occur anywhere within this region, making the entire middle band in the figure open to the possibility of ties. To handle all possible ties, we must sum over all distances e between d - i and d + i, then sum over the possible numbers k of nearest instances (positive or negative) that have been stored at each such distance. In each case, we must multiply the probability M(J_e)_{n,k} of that number occurring by the accuracy E(J_e)^{-}_k that results from such a tie. We can state this formally as

    T(J_{a,b})^{-}_n = \sum_{e=a}^{b} \sum_{k=1}^{n} M(J_e)_{n,k} E(J_e)^{-}_k .    (6)

We can further decompose the first term in the product into the probability that exactly k of the n training cases are distance e from the test case and that the remaining n - k are at some greater distance (since the k cases are the nearest ones), giving

    M(J_e)_{n,k} = \binom{n}{k} \left[ \frac{\binom{r+i}{e}}{2^{r+i}} \right]^{k} [1 - W(J_e)]^{n-k}    (7)

as the formal expression. To compute the accuracy given a tie among k stored cases, we must sum over the possible numbers j of negative instances, in each case multiplying the subaccuracy by the probability of that occurrence. This gives

    E(J_e)^{-}_k = \sum_{j=0}^{k} \frac{j}{k} \binom{k}{j} [1 - V(C)^{-}_{e,d}]^{j} [V(C)^{-}_{e,d}]^{k-j} ,    (8)

where j/k is the expected accuracy when one selects a training case at random from a set that contains j out of k negative instances. The term V(C)^{-}_{e,d} represents the probability of a positive instance J_e given that the instance is e steps away from negative test case I_d, which is in turn d steps away from the prototype. We can expand this term to

    V(C)^{-}_{e,d} = \sum_{k=1}^{\min(r,d)} \frac{\binom{r}{k} \binom{i}{d-k}}{\binom{r+i}{d} - \binom{i}{d}} \cdot \frac{\binom{i}{e-k}}{\binom{r+i}{e}} ,    (9)

provided k <= e, k <= r, and d - k <= i, and to zero otherwise. This expression sums over different ways in which distance d can occur, with some steps along relevant attributes and others along irrelevant ones, multiplied by the probability that an instance e steps away from the resulting test case will be positive. The final term from equation (3), F(J_{d+i+1})_n, specifies the contribution to the accuracy when the nearest training instance J_e has distance greater than d + i from test case I_d (i.e., d + i < e <= r + i), and thus falls within the outermost region in Figure 1, which contains only negative instances. As with the N(J_{d-i-1})_n term, the algorithm will correctly classify I_d as negative whenever this occurs. The chance that this situation will arise is

    F(J_e)_n = \left[ \frac{1}{2^{r+i}} \sum_{j=e}^{r+i} \binom{r+i}{j} \right]^{n} ,    (10)

which is the probability that any given training case will fall at distance e or greater from the test instance, taken to power n to generate the probability that every training instance seen so far satisfies this condition.
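The tie-handling terms can likewise be computed numerically. The sketch below follows our reading of equations (7) through (9); in particular, the exact form of V(C)^{-}_{e,d} is a reconstruction from the surrounding text rather than a verbatim transcription, so treat it as an assumption. It does check out against small cases one can count by hand:

```python
from math import comb

def W(e, r, i):
    # Eq. (4): probability of falling within e or fewer steps of a given case.
    if e < 0:
        return 0.0
    return sum(comb(r + i, j) for j in range(min(e, r + i) + 1)) / 2 ** (r + i)

def M(e, n, k, r, i):
    # Eq. (7): probability that exactly k of n training cases lie at distance e
    # and the remaining n - k lie strictly farther away.
    p_at_e = comb(r + i, e) / 2 ** (r + i)
    return comb(n, k) * p_at_e ** k * (1.0 - W(e, r, i)) ** (n - k)

def V_neg(e, d, r, i):
    # Eq. (9), as reconstructed here: probability that an instance e steps from
    # a negative test case (itself d steps from the prototype) is positive.
    # k counts how many of the d differing attributes are relevant (k >= 1).
    negatives_at_d = comb(r + i, d) - comb(i, d)
    total = 0.0
    for k in range(1, min(r, d) + 1):
        if k <= e and d - k <= i and e - k <= i:
            total += (comb(r, k) * comb(i, d - k) / negatives_at_d) \
                     * comb(i, e - k) / comb(r + i, e)
    return total

def E_neg(e, d, k, r, i):
    # Eq. (8): expected accuracy of a random pick among k tied nearest cases,
    # j of which are negative with probability C(k,j) (1-V)^j V^(k-j).
    v = V_neg(e, d, r, i)
    return sum((j / k) * comb(k, j) * (1 - v) ** j * v ** (k - j)
               for j in range(k + 1))
```

As a hand check with r = 2 and i = 1: a negative test case at d = 1 differs from the prototype in one relevant attribute, and exactly one of the three instances at distance e = 1 from it is positive, so V_neg(1, 1, 2, 1) should equal 1/3 and a one-way tie there yields expected accuracy 2/3.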
Now we can turn to A(C)_n, the accuracy on positive test cases after n training instances. The situation here is simpler than for negative test cases, but still nontrivial. The algorithm is guaranteed to classify a positive test case I correctly only when the nearest stored training instance is itself the test case (i.e., e = 0). Ties can occur anywhere in the range 1 <= e <= i, giving the expression
    A(C)_n = N(J_0)_n + T(J_{1,i})^{+}_n .    (11)

Because one can treat any positive instance as the prototype, there is no need to sum over different distances d here. Moreover, since no positive instance can be more than i steps away from any other, we can omit the third term of equation (3), F(J_{i+1})_n, which is always zero. The term for handling ties is analogous to equation (6) for the negative situation, but we must revise the definition for E(J_e)^{-}_k in equation (8) to

    E(J_e)^{+}_k = \sum_{j=0}^{k} \frac{k-j}{k} \binom{k}{j} [1 - V(C)^{+}_e]^{j} [V(C)^{+}_e]^{k-j} .    (12)

Note that here we must use the numerator k - j rather than j, in that we are dealing with positive test cases. Moreover, we must take a different approach to computing V(C)^{+}_e, the probability of a positive instance J_e given that the instance is e steps away from positive test case I. In this case, we have

    V(C)^{+}_e = \frac{\binom{i}{e}}{\binom{r+i}{e}}    (13)

when e <= i, but zero in other situations. Taken together, the definitions for A(C)_n, A(\bar{C})_n, and their component terms let us predict the overall accuracy A_n for the nearest neighbor algorithm as a function of the number of training instances n, the number of relevant attributes r, and the number of irrelevant attributes i.

3. Behavioral Implications of the Analysis

Although the equations in the previous sections provide a formal characterization of the nearest neighbor algorithm's behavior, their implications are not obvious. To better understand the effects of domain characteristics, we systematically varied certain domain parameters and examined the predicted results. In addition to computing theoretical predictions, we also collected experimental learning curves that summarize the algorithm's actual behavior. Each datum on these curves reports the classification accuracy averaged over 100 runs on randomly generated training sets, measured over the entire space of uniformly distributed noise-free instances.
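The experimental procedure just described is easy to reproduce in miniature. This sketch (our code, with small parameters for speed) trains the nearest neighbor method on randomly generated training sets and measures accuracy over the entire instance space, the same way the empirical learning curves were produced:

```python
import itertools
import random

def nn_accuracy(r, i, n, runs=100, seed=0):
    """Average 1-NN accuracy over `runs` random training sets of size n,
    measured over the entire space of noise-free Boolean instances."""
    rng = random.Random(seed)
    space = list(itertools.product([False, True], repeat=r + i))

    def label(x):   # conjunctive target concept on the r relevant attributes
        return all(x[:r])

    def dist(x, y):  # number of attributes on which two cases differ
        return sum(a != b for a, b in zip(x, y))

    total = 0.0
    for _ in range(runs):
        memory = [rng.choice(space) for _ in range(n)]   # uniform training sample
        correct = 0
        for q in space:
            best = min(dist(m, q) for m in memory)
            nearest = [label(m) for m in memory if dist(m, q) == best]
            if rng.choice(nearest) == label(q):          # random tie-breaking
                correct += 1
        total += correct / len(space)
    return total / runs
```

One sanity check: with r = 1 and i = 0, a single stored training case classifies exactly one of the two instances correctly, so the accuracy after n = 1 is exactly 0.5 regardless of which case was drawn.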
In each case, we bound the mean accuracy with 95% confidence intervals to show the degree to which our predicted learning curves fit the observed ones. These experimental results provide an important check on our reasoning, and they identified a number of problems during development of the analysis. Figure 2 shows the effects of the number of relevant attributes in the conjunctive target concept. For this study, we held the number of irrelevant attributes i constant at one, and we varied both the number of training instances and the number of relevant attributes r. As typical with learning curves, the accuracy starts low and gradually improves as the algorithm encounters more training instances. The effects of target complexity also make sense. Increasing the number of relevant features should increase the overall number of negative instances, giving higher accuracy early in the induction process; however, this factor also increases the total number of possible instances, requiring more training cases to reach asymptote and producing a crossover effect. The learning rate seems to degrade gracefully with increasing complexity, and the theoretical and actual learning curves are in close agreement, which lends confidence to the analysis.

Figure 2. Predictive accuracy of the nearest neighbor algorithm on a conjunctive concept, assuming the presence of one irrelevant attribute, as a function of training instances and the number of relevant attributes (1, 2, and 4 relevants shown). The lines represent theoretical learning curves, whereas the error bars indicate experimental results.

The sensitivity of the nearest neighbor algorithm to irrelevant attributes is more dramatic, as shown in Figure 3. This graph summarizes the results of a similar study of the interaction between the number of training instances n and the number of irrelevant attributes.
Here we held the number of relevant attributes constant at two, and we examined three levels of the i parameter. As with the previous study, the degradation in learning rate is graceful, but the effect is somewhat greater. The difference between the two results appears more significant when one realizes that increasing i does not reduce the proportion of positive instances, as does increasing the number of relevant attributes. These observations are consistent with Aha's (1990) reports on the sensitivity of nearest neighbor methods to the number of irrelevant attributes. We can also compare the behavior of the nearest neighbor algorithm to that of other induction methods for which average-case analyses exist. In particular, Pazzani and Sarrett (1992) have studied the Wholist algorithm, which initializes its concept description to the conjunction of features in the first positive training instance, then removes any feature that fails to occur in later positive instances. Similarly, Langley, Iba, and Thompson (1992) have analyzed the behavior of the Bayesian classifier, a simple probabilistic method that stores observed
Figure 3. Predictive accuracy of the nearest neighbor algorithm on a conjunctive concept, assuming the presence of two relevant attributes, as a function of training instances and the number of irrelevant attributes (1, 3, and 5 irrelevants shown). The lines represent theoretical learning curves, whereas the error bars indicate experimental results.

base rates and conditional probabilities. As Pazzani and Sarrett note, Wholist's learning rate is unaffected by the number of relevant attributes, so their algorithm clearly scales up better on this dimension than does the nearest neighbor technique. Comparison to the Bayesian classifier on this factor is more difficult, in that Langley et al.'s study examined equal probabilities for the two classes, whereas the current analysis assumes that they differ. In some domains, effective learning relies more on the ability to handle many irrelevant features than many relevant ones. In this vein, we have shown analytically that the number of training instances required for Wholist to achieve a given level of accuracy increases only with the logarithm of the number of irrelevant attributes. Although we have not yet derived similar analytic relations for the nearest neighbor or probabilistic methods, we can use the existing analyses to estimate ability to scale on this dimension. Figure 4 graphs the predicted number of training instances needed to achieve 90% accuracy for each algorithm as a function of the number of irrelevant attributes, assuming a target concept involving only one relevant feature and a uniform distribution of instances. The analyses do not provide these quantities directly, but one can interpolate them from the theoretical learning curves.
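The interpolation step can be made explicit. Given a theoretical learning curve as (n, accuracy) pairs, this sketch (our code) returns the linearly interpolated number of training instances at which the curve first reaches a target accuracy, which is the quantity plotted in Figure 4:

```python
def instances_to_reach(curve, target):
    """`curve` is a list of (n, accuracy) pairs sorted by n. Return the
    linearly interpolated number of training instances at which accuracy
    first reaches `target`, or None if the curve never gets there."""
    if curve and curve[0][1] >= target:
        return curve[0][0]          # already at target before training
    for (n0, a0), (n1, a1) in zip(curve, curve[1:]):
        if a0 < target <= a1:
            # linear interpolation between the two bracketing points
            return n0 + (n1 - n0) * (target - a0) / (a1 - a0)
    return None
```

For instance, on the curve [(0, 0.5), (10, 0.8), (20, 0.95)], the 90% level is crossed between n = 10 and n = 20, at roughly n = 16.7.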
The figure reveals that the Bayesian classifier scales well to increasing numbers of irrelevant attributes, with the dependent measure growing as an approximately linear function of this factor. In contrast, the number of training instances required by the nearest neighbor method grows much faster, although we cannot yet determine the precise superlinear relation. These results are also consistent with Aha's (1990) conclusions about the response of standard instance-based methods to many irrelevant features.

Figure 4. Theoretical number of training instances (interpolated) required to reach 90% accuracy by a nearest neighbor algorithm and a simple Bayesian classifier on a conjunctive concept, assuming the presence of one relevant attribute, as a function of the number of irrelevant attributes.

However, the above comparisons are not entirely fair. Neither the Wholist algorithm nor the Bayesian classifier is designed to handle disjunctive concepts, which present no obstacles to even the simplest nearest neighbor algorithm. Our focus on conjunctive concepts in the current analysis has obscured this strength. Also, Aha (1990) has developed a variant of the nearest neighbor algorithm that retains statistics on the usefulness of each attribute, and he has shown that this approach fares better in domains with many irrelevant terms. Nevertheless, the ability to make comparisons of the above type is one advantage of careful formal analyses, and they have provided insights about the relative strengths of the different learning algorithms.

4. General Discussion

In this paper we presented an average-case analysis of the most basic nearest neighbor algorithm. Our treatment assumes that the target concept is conjunctive, that instances are free of noise, that attributes are Boolean, and that instances are uniformly distributed. Given information about the number of relevant and irrelevant attributes, our equations let us compute the expected classification accuracy after a given number of training instances.
To explore the implications of the analysis, we plotted the predicted behavior of the algorithm as a function of these three factors, finding graceful degradation as the number of relevants r and irrelevants i increased, but finding a stronger effect for the second. As a check on our analysis, we ran the algorithm on artificial domains with the same characteristics. The predicted behavior closely fit that found in the experiments, but only after correcting several errors in our reasoning that the empirical studies revealed.
These results begin to account for the wide range of performance observed for the algorithm by Aha and others on natural domains. However, a full explanation will require several extensions to the analysis. In particular, we must incorporate the influence of both class and attribute noise, as we have done in earlier analyses (Iba & Langley, 1992; Langley et al., 1992). We must also handle situations in which each attribute follows a separate probability distribution, following the approach taken by Hirschberg and Pazzani (1991). Even more important, we must extend the framework to handle broader classes of target concepts. Nearest neighbor methods are well suited for M of N concepts, in which any M of the N features in the prototype are sufficient for membership in the class. Since distance from the prototype plays a central role in the current analysis, we believe extending it to handle such concepts will be quite feasible. Similarly, because the algorithm stores many training instances in memory, it can easily acquire disjunctive concepts that require multiple prototypes. Again, we hope that simple extensions to the existing framework will handle this situation. We should also generalize the analysis to include k-nearest neighbor methods, following the lead recently provided by Turney (in press). Another direction for future work would attempt to map the extended analysis onto natural domains in which there already exist experimental results with the method. Given information about the distributions of attributes (which are available in the data), along with estimates of the noise levels and target concepts (which require informed guesses), we can compare learning curves predicted by the theory with those observed in experimental runs. This approach would extend the applicability of our average-case model beyond the artificial domains to which we have limited our tests to date.
In summary, we believe that our initial analysis has provided some useful insights about the behavior of the basic nearest neighbor algorithm. These begin to explain why the algorithm compares favorably with more complex induction methods on some domains but not others, and our results are consistent with intuitions about the algorithm's sensitivity to irrelevant attributes. We also believe the existing theoretical framework can be extended to handle more challenging target concepts and other factors that complicate the learning task, thus providing a solid base on which to carry out further studies of instance-based learning.

Acknowledgements

We would like to thank Stephanie Sage and David Aha for discussions that helped clarify our ideas, David Aha and three reviewers for useful comments, and Nils Nilsson for his support and encouragement. The first author contributed to this work while he was at the Center for the Study of Language and Information, Stanford University, and the Institute for the Study of Learning and Expertise.

References

Aha, D. W. (1990). A study of instance-based algorithms for supervised learning tasks: Mathematical, empirical, and psychological evaluations. Doctoral dissertation, Department of Information & Computer Science, University of California, Irvine.

Aha, D. W., Kibler, D., & Albert, M. K. (1991). Instance-based learning algorithms. Machine Learning, 6, 37-66.

Cover, T. M., & Hart, P. E. (1967). Nearest neighbor pattern classification. IEEE Transactions on Information Theory, 13, 21-27.

Dasarathy, B. V. (Ed.). (1991). Nearest neighbor (NN) norms: NN pattern classification techniques. Los Alamitos, CA: IEEE Computer Society Press.

Fisher, D. H. (1987). Knowledge acquisition via incremental conceptual clustering. Machine Learning, 2, 139-172.

Hirschberg, D. S., & Pazzani, M. J. (1991). Average-case analysis of a k-CNF learning algorithm (Technical Report 91-50). Irvine: University of California, Department of Information & Computer Science.

Iba, W., & Langley, P. (1992).
Induction of one-level decision trees. Proceedings of the Ninth International Conference on Machine Learning (pp. 233-240). Aberdeen, Scotland: Morgan Kaufmann.

Langley, P., Iba, W., & Thompson, K. (1992). An analysis of Bayesian classifiers. Proceedings of the Tenth National Conference on Artificial Intelligence (pp. 223-228). San Jose, CA: AAAI Press.

Pazzani, M. J., & Sarrett, W. (1992). A framework for the average case analysis of conjunctive learning algorithms. Machine Learning, 9, 349-372.

Quinlan, J. R. (1986). Induction of decision trees. Machine Learning, 1, 81-106.

Rumelhart, D. E., & McClelland, J. L. (1986). On learning the past tenses of English verbs. In J. L. McClelland & D. E. Rumelhart (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 2). Cambridge, MA: MIT Press.

Stanfill, C., & Waltz, D. (1986). Toward memory-based reasoning. Communications of the ACM, 29, 1213-.

Turney, P. (in press). Voting in instance-based learning: A theoretical analysis. Journal of Experimental and Theoretical Artificial Intelligence.
More informationTraffic Delay Studies at Signalized Intersections with Global Positioning System Devices
Traffic Delay Stuies at Signalize Intersections with Global Positioning System Devices THIS FEATURE PRESENTS METHODS FOR APPLYING GPS DEVICES TO STUDY TRAFFIC DELAYS AT SIGNALIZED INTERSECTIONS. MOST OF
More informationPROBLEMS. A.1 Implement the COINCIDENCE function in sumofproducts form, where COINCIDENCE = XOR.
724 APPENDIX A LOGIC CIRCUITS (Corrispone al cap. 2  Elementi i logica) PROBLEMS A. Implement the COINCIDENCE function in sumofproucts form, where COINCIDENCE = XOR. A.2 Prove the following ientities
More informationDIFFRACTION AND INTERFERENCE
DIFFRACTION AND INTERFERENCE In this experiment you will emonstrate the wave nature of light by investigating how it bens aroun eges an how it interferes constructively an estructively. You will observe
More informationINFLUENCE OF GPS TECHNOLOGY ON COST CONTROL AND MAINTENANCE OF VEHICLES
1 st Logistics International Conference Belgrae, Serbia 2830 November 2013 INFLUENCE OF GPS TECHNOLOGY ON COST CONTROL AND MAINTENANCE OF VEHICLES Goran N. Raoičić * University of Niš, Faculty of Mechanical
More informationJON HOLTAN. if P&C Insurance Ltd., Oslo, Norway ABSTRACT
OPTIMAL INSURANCE COVERAGE UNDER BONUSMALUS CONTRACTS BY JON HOLTAN if P&C Insurance Lt., Oslo, Norway ABSTRACT The paper analyses the questions: Shoul or shoul not an iniviual buy insurance? An if so,
More informationA Comparison of Performance Measures for Online Algorithms
A Comparison of Performance Measures for Online Algorithms Joan Boyar 1, Sany Irani 2, an Kim S. Larsen 1 1 Department of Mathematics an Computer Science, University of Southern Denmark, Campusvej 55,
More informationLecture 8: Expanders and Applications
Lecture 8: Expaners an Applications Topics in Complexity Theory an Pseuoranomness (Spring 013) Rutgers University Swastik Kopparty Scribes: Amey Bhangale, Mrinal Kumar 1 Overview In this lecture, we will
More informationThe Elastic Capacitor and its Unusual Properties
1 The Elastic Capacitor an its Unusual Properties Michael B. Partensky, Department of Chemistry, Braneis University, Waltham, MA 453 partensky@attbi.com The elastic capacitor (EC) moel was first introuce
More informationx4 x3 x2 x1 x0 E x4 x3 x2 x1 x0 E
Chapter 9 xercise 9. (a) Implement a output (ecimal) ecoer for outof coe using NAND gates. The outof coe represents ecimal igits as shown in the following table: i x x x x x 8 9 ach output z i of
More informationMemory Management. 3.1 Fixed Partitioning
Chapter 3 Memory Management In a multiprogramming system, in orer to share the processor, a number o processes must be kept in memory. Memory management is achieve through memory management algorithms.
More informationRobust Reading: Identification and Tracing of Ambiguous Names
Robust Reaing: Ientification an Tracing of Ambiguous Names Xin Li Paul Morie Dan Roth Department of Computer Science University of Illinois, Urbana, IL 61801 {xli1,morie,anr}@uiuc.eu Abstract A given entity,
More informationIdentification and Tracing of Ambiguous Names: Discriminative and Generative Approaches
Ientification an Tracing of Ambiguous Names: Discriminative an Generative Approaches Xin Li Paul Morie Dan Roth Department of Computer Science University of Illinois, Urbana, IL 61801 {xli1,morie,anr}@uiuc.eu
More informationRural Development Tools: What Are They and Where Do You Use Them?
Faculty Paper Series Faculty Paper 0009 June, 2000 Rural Development Tools: What Are They an Where Do You Use Them? By Dennis U. Fisher Professor an Extension Economist fisher@tamu.eu Juith I. Stallmann
More informationRisk Management for Derivatives
Risk Management or Derivatives he Greeks are coming the Greeks are coming! Managing risk is important to a large number o iniviuals an institutions he most unamental aspect o business is a process where
More informationIntegral Regular Truncated Pyramids with Rectangular Bases
Integral Regular Truncate Pyramis with Rectangular Bases Konstantine Zelator Department of Mathematics 301 Thackeray Hall University of Pittsburgh Pittsburgh, PA 1560, U.S.A. Also: Konstantine Zelator
More informationMODELLING OF TWO STRATEGIES IN INVENTORY CONTROL SYSTEM WITH RANDOM LEAD TIME AND DEMAND
art I. robobabilystic Moels Computer Moelling an New echnologies 27 Vol. No. 23 ransport an elecommunication Institute omonosova iga V9 atvia MOEING OF WO AEGIE IN INVENOY CONO YEM WIH ANOM EA IME AN
More informationThe Inefficiency of Marginal cost pricing on roads
The Inefficiency of Marginal cost pricing on roas Sofia GrahnVoornevel Sweish National Roa an Transport Research Institute VTI CTS Working Paper 4:6 stract The economic principle of roa pricing is that
More informationView Synthesis by Image Mapping and Interpolation
View Synthesis by Image Mapping an Interpolation Farris J. Halim Jesse S. Jin, School of Computer Science & Engineering, University of New South Wales Syney, NSW 05, Australia Basser epartment of Computer
More informationDigital barrier option contract with exponential random time
IMA Journal of Applie Mathematics Avance Access publishe June 9, IMA Journal of Applie Mathematics ) Page of 9 oi:.93/imamat/hxs3 Digital barrier option contract with exponential ranom time Doobae Jun
More informationWeek 4  Linear Demand and Supply Curves
Week 4  Linear Deman an Supply Curves November 26, 2007 1 Suppose that we have a linear eman curve efine by the expression X D = A bp X an a linear supply curve given by X S = C + P X, where b > 0 an
More informationM147 Practice Problems for Exam 2
M47 Practice Problems for Exam Exam will cover sections 4., 4.4, 4.5, 4.6, 4.7, 4.8, 5., an 5.. Calculators will not be allowe on the exam. The first ten problems on the exam will be multiple choice. Work
More informationMath 230.01, Fall 2012: HW 1 Solutions
Math 3., Fall : HW Solutions Problem (p.9 #). Suppose a wor is picke at ranom from this sentence. Fin: a) the chance the wor has at least letters; SOLUTION: All wors are equally likely to be chosen. The
More informationChapter 2 Review of Classical Action Principles
Chapter Review of Classical Action Principles This section grew out of lectures given by Schwinger at UCLA aroun 1974, which were substantially transforme into Chap. 8 of Classical Electroynamics (Schwinger
More information9.5. Interference of Light Waves: Young s doubleslit Experiment
Interference of Light Waves: Young s oubleslit Experiment At the en of the 1600s an into the 1700s, the ebate over the nature of light was in full swing. Newton s theory of light particles face challenges
More informationLECTURE 15: LINEAR ARRAY THEORY  PART I
LECTURE 5: LINEAR ARRAY THEORY  PART I (Linear arrays: the twoelement array; the Nelement array with uniform amplitue an spacing; broa  sie array; enfire array; phase array). Introuction Usually the
More informationUniversal Gravity Based on the Electric Universe Model
Universal Gravity Base on the Electric Universe Moel By Frerik Nygaar frerik_nygaar@hotmail.com Nov 015, Porto Portugal. Introuction While most people are aware of both Newton's Universal Law of Gravity
More informationLagrangian and Hamiltonian Mechanics
Lagrangian an Hamiltonian Mechanics D.G. Simpson, Ph.D. Department of Physical Sciences an Engineering Prince George s Community College December 5, 007 Introuction In this course we have been stuying
More informationLABORATORY COMPACTION TEST Application
1 LABORATORY COMPACTION TEST Application Compaction of soil is probably one of the largest contributors to the site ork economy; in other orsit is important. Large sums of money are spent on it every
More informationDEVELOPMENT OF A BRAKING MODEL FOR SPEED SUPERVISION SYSTEMS
DEVELOPMENT OF A BRAKING MODEL FOR SPEED SUPERVISION SYSTEMS Paolo Presciani*, Monica Malvezzi #, Giuseppe Luigi Bonacci +, Monica Balli + * FS Trenitalia Unità Tecnologie Materiale Rotabile Direzione
More informationIntroduction to Integration Part 1: AntiDifferentiation
Mathematics Learning Centre Introuction to Integration Part : AntiDifferentiation Mary Barnes c 999 University of Syney Contents For Reference. Table of erivatives......2 New notation.... 2 Introuction
More informationFactoring Dickson polynomials over finite fields
Factoring Dickson polynomials over finite fiels Manjul Bhargava Department of Mathematics, Princeton University. Princeton NJ 08544 manjul@math.princeton.eu Michael Zieve Department of Mathematics, University
More informationThe oneyear nonlife insurance risk
The oneyear nonlife insurance risk Ohlsson, Esbjörn & Lauzeningks, Jan Abstract With few exceptions, the literature on nonlife insurance reserve risk has been evote to the ultimo risk, the risk in the
More informationA Case Study of Applying SOM in Market Segmentation of Automobile Insurance Customers
International Journal of Database Theory an Application, pp.2536 http://x.oi.org/10.14257/ijta.2014.7.1.03 A Case Stuy of Applying SOM in Market Segmentation of Automobile Insurance Customers Vahi Golmah
More information2 HYPERBOLIC FUNCTIONS
HYPERBOLIC FUNCTIONS Chapter Hyperbolic Functions Objectives After stuying this chapter you shoul unerstan what is meant by a hyperbolic function; be able to fin erivatives an integrals of hyperbolic functions;
More informationPractical Lab 2 The Diffraction Grating
Practical Lab 2 The Diffraction Grating OBJECTIVES: 1) Observe the interference pattern prouce when laser light passes through multipleslit grating (a iffraction grating). 2) Graphically verify the wavelength
More informationRobust Reading of Ambiguous Writing
In submission; please o not istribute. Abstract A given entity, representing a person, a location or an organization, may be mentione in text in multiple, ambiguous ways. Unerstaning natural language requires
More informationStudy on the Price Elasticity of Demand of Beijing Subway
Journal of Traffic an Logistics Engineering, Vol, 1, No. 1 June 2013 Stuy on the Price Elasticity of Deman of Beijing Subway Yanan Miao an Liang Gao MOE Key Laboratory for Urban Transportation Complex
More informationStock Market Value Prediction Using Neural Networks
Stock Market Value Preiction Using Neural Networks Mahi Pakaman Naeini IT & Computer Engineering Department Islamic Aza University Paran Branch email: m.pakaman@ece.ut.ac.ir Hamireza Taremian Engineering
More informationA New Evaluation Measure for Information Retrieval Systems
A New Evaluation Measure for Information Retrieval Systems Martin Mehlitz martin.mehlitz@ailabor.e Christian Bauckhage Deutsche Telekom Laboratories christian.bauckhage@telekom.e Jérôme Kunegis jerome.kunegis@ailabor.e
More information6.3 Microbial growth in a chemostat
6.3 Microbial growth in a chemostat The chemostat is a wielyuse apparatus use in the stuy of microbial physiology an ecology. In such a chemostat also known as continuousflow culture), microbes such
More informationnparameter families of curves
1 nparameter families of curves For purposes of this iscussion, a curve will mean any equation involving x, y, an no other variables. Some examples of curves are x 2 + (y 3) 2 = 9 circle with raius 3,
More informationLecture 17: Implicit differentiation
Lecture 7: Implicit ifferentiation Nathan Pflueger 8 October 203 Introuction Toay we iscuss a technique calle implicit ifferentiation, which provies a quicker an easier way to compute many erivatives we
More informationy or f (x) to determine their nature.
Level C5 of challenge: D C5 Fining stationar points of cubic functions functions Mathematical goals Starting points Materials require Time neee To enable learners to: fin the stationar points of a cubic
More informationExploratory Optimal Latin Hypercube Designs for Computer Simulated Experiments
Thailan Statistician July 0; 9() : 793 http://statassoc.or.th Contribute paper Exploratory Optimal Latin Hypercube Designs for Computer Simulate Experiments Rachaaporn Timun [a,b] Anamai Nauom* [a,b]
More informationMathematics Review for Economists
Mathematics Review for Economists by John E. Floy University of Toronto May 9, 2013 This ocument presents a review of very basic mathematics for use by stuents who plan to stuy economics in grauate school
More informationMathematics. Circles. hsn.uk.net. Higher. Contents. Circles 119 HSN22400
hsn.uk.net Higher Mathematics UNIT OUTCOME 4 Circles Contents Circles 119 1 Representing a Circle 119 Testing a Point 10 3 The General Equation of a Circle 10 4 Intersection of a Line an a Circle 1 5 Tangents
More information15.2. FirstOrder Linear Differential Equations. FirstOrder Linear Differential Equations Bernoulli Equations Applications
00 CHAPTER 5 Differential Equations SECTION 5. FirstOrer Linear Differential Equations FirstOrer Linear Differential Equations Bernoulli Equations Applications FirstOrer Linear Differential Equations
More informationDouble Integrals in Polar Coordinates
Double Integrals in Polar Coorinates Part : The Area Di erential in Polar Coorinates We can also aly the change of variable formula to the olar coorinate transformation x = r cos () ; y = r sin () However,
More informationProfit Shifting and Trade Agreements in Imperfectly Competitive Markets
Profit Shifting an Trae Agreements in Imperfectly Competitive Markets Kyle Bagwell Stanfor an NBER Robert W. Staiger Stanfor an NBER March 9, 2009 Abstract When markets are imperfectly competitive, trae
More information19.2. First Order Differential Equations. Introduction. Prerequisites. Learning Outcomes
First Orer Differential Equations 19.2 Introuction Separation of variables is a technique commonly use to solve first orer orinary ifferential equations. It is socalle because we rearrange the equation
More informationModule 2. DC Circuit. Version 2 EE IIT, Kharagpur
Moule 2 DC Circuit Lesson 9 Analysis of c resistive network in presence of one nonlinear element Objectives To unerstan the volt (V ) ampere ( A ) characteristics of linear an nonlinear elements. Concept
More informationSOME NOTES ON THE HYPERBOLIC TRIG FUNCTIONS SINH AND COSH
SOME NOTES ON THE HYPERBOLIC TRIG FUNCTIONS SINH AND COSH Basic Definitions In homework set # one of the questions involves basic unerstaning of the hyperbolic functions sinh an cosh. We will use this
More informationPerformance of 2D versus 3D Topographic Representations for Different Task Types
Performance of versus Topographic Representations for Different s Debra MacIvor Savage, Eric N. Wiebe an Hugh A. Devine North Carolina State University Raleigh, North Carolina In this stuy, a performance
More informationImproving Direct Marketing Profitability with Neural Networks
Volume 9 o.5, September 011 Improving Direct Marketing Profitability with eural etworks Zaiyong Tang Salem State University Salem, MA 01970 ABSTRACT Data mining in irect marketing aims at ientifying the
More informationWeb Appendices of Selling to Overcon dent Consumers
Web Appenices of Selling to Overcon ent Consumers Michael D. Grubb A Option Pricing Intuition This appenix provies aitional intuition base on option pricing for the result in Proposition 2. Consier the
More information11 CHAPTER 11: FOOTINGS
CHAPTER ELEVEN FOOTINGS 1 11 CHAPTER 11: FOOTINGS 11.1 Introuction Footings are structural elements that transmit column or wall loas to the unerlying soil below the structure. Footings are esigne to transmit
More informationCalculating Viscous Flow: Velocity Profiles in Rivers and Pipes
previous inex next Calculating Viscous Flow: Velocity Profiles in Rivers an Pipes Michael Fowler, UVa 9/8/1 Introuction In this lecture, we ll erive the velocity istribution for two examples of laminar
More informationImplementing IP Traceback in the Internet An ISP Perspective
Implementing IP Traceback in the Internet An ISP Perspective Dong Wei, Stuent Member, IEEE, an Nirwan Ansari, Senior Member, IEEE AbstractDenialofService (DoS) attacks consume the resources of remote
More informationDi usion on Social Networks. Current Version: June 6, 2006 Appeared in: Économie Publique, Numéro 16, pp 316, 2005/1.
Di usion on Social Networks Matthew O. Jackson y Caltech Leeat Yariv z Caltech Current Version: June 6, 2006 Appeare in: Économie Publique, Numéro 16, pp 316, 2005/1. Abstract. We analyze a moel of i
More informationTO DETERMINE THE SHELF LIFE OF IBUPROFEN SOLUTION
TO DETERMINE THE SHELF LIFE OF IBUPROFEN SOLUTION AIM: To etermine the shelf life of solution using accelerate stability stuies. MATERIAL REQUIREMENT, ethanol, phenolphthalein, soium hyroxie (0.1N), glass
More informationSection 3.3. Differentiation of Polynomials and Rational Functions. Difference Equations to Differential Equations
Difference Equations to Differential Equations Section 3.3 Differentiation of Polynomials an Rational Functions In tis section we begin te task of iscovering rules for ifferentiating various classes of
More informationCost Efficient Datacenter Selection for Cloud Services
Cost Efficient Datacenter Selection for Clou Services Hong u, Baochun Li henryxu, bli@eecg.toronto.eu Department of Electrical an Computer Engineering University of Toronto Abstract Many clou services
More informationModelling and Resolving Software Dependencies
June 15, 2005 Abstract Many Linux istributions an other moern operating systems feature the explicit eclaration of (often complex) epenency relationships between the pieces of software
More informationWeb Appendices to Selling to Overcon dent Consumers
Web Appenices to Selling to Overcon ent Consumers Michael D. Grubb MIT Sloan School of Management Cambrige, MA 02142 mgrubbmit.eu www.mit.eu/~mgrubb May 2, 2008 B Option Pricing Intuition This appenix
More informationA LIQUIDITYPROFITABILITY TRADEOFF MODEL FOR WORKING CAPITAL MANAGEMENT MIHIR DASH 1 RANI HANUMAN
A LIQUIDITYPROFITABILITY TRADEOFF MODEL FOR WORKING CAPITAL MANAGEMENT MIHIR DASH RANI HANUMAN ABSTRACT This paper proposes a goal programming moel for working capital management. Goal programming is
More informationRules for Finding Derivatives
3 Rules for Fining Derivatives It is teious to compute a limit every time we nee to know the erivative of a function. Fortunately, we can evelop a small collection of examples an rules that allow us to
More informationCompact Form of Expressions for Inductance Calculation of Meander Inductors
SERBIAN JOURNAL OF ELECTRICAL ENGINEERING Vol. 1, No. 3, November 2004, 5768 Compact Form of Expressions for Inuctance Calculation of Meaner Inuctors Goran Stojanovic 1, Ljiljana Živanov 2, Mirjana Damjanovic
More informationMeasures of distance between samples: Euclidean
4 Chapter 4 Measures of istance between samples: Eucliean We will be talking a lot about istances in this book. The concept of istance between two samples or between two variables is funamental in multivariate
More informationThroughputScheduler: Learning to Schedule on Heterogeneous Hadoop Clusters
ThroughputScheuler: Learning to Scheule on Heterogeneous Haoop Clusters Shehar Gupta, Christian Fritz, Bob Price, Roger Hoover, an Johan e Kleer Palo Alto Research Center, Palo Alto, CA, USA {sgupta, cfritz,
More informationA BlameBased Approach to Generating Proposals for Handling Inconsistency in Software Requirements
International Journal of nowlege an Systems Science, 3(), 7, JanuaryMarch 0 A lamease Approach to Generating Proposals for Hanling Inconsistency in Software Requirements eian Mu, Peking University,
More informationJitter effects on Analog to Digital and Digital to Analog Converters
Jitter effects on Analog to Digital an Digital to Analog Converters Jitter effects copyright 1999, 2000 Troisi Design Limite Jitter One of the significant problems in igital auio is clock jitter an its
More informationISSN: 22773754 ISO 9001:2008 Certified International Journal of Engineering and Innovative Technology (IJEIT) Volume 3, Issue 12, June 2014
ISSN: 77754 ISO 900:008 Certifie International Journal of Engineering an Innovative echnology (IJEI) Volume, Issue, June 04 Manufacturing process with isruption uner Quaratic Deman for Deteriorating Inventory
More informationAnswers to the Practice Problems for Test 2
Answers to the Practice Problems for Test 2 Davi Murphy. Fin f (x) if it is known that x [f(2x)] = x2. By the chain rule, x [f(2x)] = f (2x) 2, so 2f (2x) = x 2. Hence f (2x) = x 2 /2, but the lefthan
More informationState of Louisiana Office of Information Technology. Change Management Plan
State of Louisiana Office of Information Technology Change Management Plan Table of Contents Change Management Overview Change Management Plan Key Consierations Organizational Transition Stages Change
More information20122013 Enhanced Instructional Transition Guide Mathematics Algebra I Unit 08
01013 Enhance Instructional Transition Guie Unit 08: Exponents an Polynomial Operations (18 ays) Possible Lesson 01 (4 ays) Possible Lesson 0 (7 ays) Possible Lesson 03 (7 ays) POSSIBLE LESSON 0 (7 ays)
More informationThe Binomial and BlackScholes Option Pricing Models: A Pedagogical Review with Vba Implementation F.T. Oduro 1, V.K. Dedu 2
The Binomial an BlackScholes Option Pricing Moels: A Peagogical Review with Vba Implementation F.T. Ouro 1, V.K. Deu 2 12 Department o Mathematics, Kwame Nkrumah University o Science an Technology, Kumasi,
More informationTowards a Framework for Enterprise Architecture Frameworks Comparison and Selection
Towars a Framework for Enterprise Frameworks Comparison an Selection Saber Aballah Faculty of Computers an Information, Cairo University Saber_aballah@hotmail.com Abstract A number of Enterprise Frameworks
More informationImage compression predicated on recurrent iterated function systems **
1 Image compression preicate on recurrent iterate function systems ** W. Metzler a, *, C.H. Yun b, M. Barski a a Faculty of Mathematics University of Kassel, Kassel, F. R. Germany b Faculty of Mathematics
More informationStress Concentration Factors of Various Adjacent Holes Configurations in a Spherical Pressure Vessel
5 th Australasian Congress on Applie Mechanics, ACAM 2007 1012 December 2007, Brisbane, Australia Stress Concentration Factors of Various Ajacent Holes Configurations in a Spherical Pressure Vessel Kh.
More informationUnsteady Flow Visualization by Animating EvenlySpaced Streamlines
EUROGRAPHICS 2000 / M. Gross an F.R.A. Hopgoo Volume 19, (2000), Number 3 (Guest Eitors) Unsteay Flow Visualization by Animating EvenlySpace Bruno Jobar an Wilfri Lefer Université u Littoral Côte Opale,
More information