CHAPTER 5 Round-off errors
In the two previous chapters we have seen how numbers can be represented in the binary numeral system and how this is the basis for representing numbers in computers. Since any given integer only has a finite number of digits, we can represent all integers below a certain limit exactly. Non-integer numbers are considerably more cumbersome since infinitely many digits are needed to represent most of them, regardless of which numeral system we employ. This means that most non-integer numbers cannot be represented in a computer without committing an error which is usually referred to as round-off error or rounding error.

The standard computer representation of non-integer numbers is as floating point numbers with a fixed number of bits. Not only is an error usually committed when a number is represented as a floating point number; most calculations with floating point numbers will induce further round-off errors. In most situations these errors will be small, but in a long chain of calculations there is a risk that the errors may accumulate and seriously pollute the final result. It is therefore important to be able to recognise whether a given computation is going to be troublesome or not, so that we may know whether the result can be trusted.

In this chapter we will start our study of round-off errors. We will see how round-off errors arise and study how they affect the evaluation of mathematical functions. For this we need to introduce a way to measure the error, and we need to understand how arithmetic is performed.

5.1 Measuring the error

Suppose we have a number a and an approximation ã. We are going to consider two ways to measure the error in such an approximation.
5.1.1 Absolute error

The first error measure is the most obvious: it is essentially the difference between a and ã.

Definition 5.1 (Absolute error). Suppose ã is an approximation to the number a. The absolute error in this approximation is given by |a − ã|.

If a = 100 and ã = 100.1 the absolute error is 0.1, whereas if a = 1 and ã = 1.1 the absolute error is still 0.1. Note that if a is an approximation to ã, then ã is an equally good approximation to a with respect to the absolute error.

5.1.2 Relative error

For some purposes the absolute error may be what we want, but in many cases it is reasonable to say that the error is smaller in the first example above than in the second, since the numbers involved are bigger. The relative error takes this into account.

Definition 5.2 (Relative error). Suppose that ã is an approximation to the nonzero number a. The relative error in this approximation is given by |a − ã| / |a|.

We note that the relative error is obtained by scaling the absolute error with the size of the number that is approximated. If we compute the relative errors in the approximations above we find that it is 0.001 when a = 100 and ã = 100.1, while when a = 1 and ã = 1.1 the relative error is 0.1. In contrast to the absolute error, we see that the relative error tells us that the approximation in the first case is much better than in the latter case. In fact we will see in a moment that the relative error gives a good measure of the number of digits that a and ã have in common.
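Both error measures are trivial to compute in a program. The small Python sketch below (our own illustration, not code from the text) evaluates them for the two approximations just discussed; note that the printed values themselves carry tiny round-off errors of the kind studied later in this chapter.

    def absolute_error(a, a_tilde):
        # Absolute error |a - a~| of the approximation a~ to a.
        return abs(a - a_tilde)

    def relative_error(a, a_tilde):
        # Relative error |a - a~| / |a|; a must be nonzero.
        return abs(a - a_tilde) / abs(a)

    print(absolute_error(100, 100.1), relative_error(100, 100.1))  # about 0.1 and 0.001
    print(absolute_error(1, 1.1), relative_error(1, 1.1))          # about 0.1 and 0.1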
We use concepts which are closely related to absolute and relative errors in many everyday situations. One example is profit from investments, like bank accounts. Suppose you have invested some money and after one year, your profit is 100 (in your local currency). If your initial investment was 100, this is a very good profit in one year, but if your initial investment was 10 000 it is not so impressive. If we let a denote the initial investment and ã the investment after one year, we see that, apart from the sign, the profit of 100 corresponds to the absolute error in the two cases. On the other hand, the relative error corresponds to the profit measured in %, again apart from the sign. This says much more about how good the investment is since the profit is compared to the initial investment. In the two cases here, we find that the relative profit is 1 = 100 % in the first case and 0.01 = 1 % in the second.

Another situation where we use relative measures is when we give the concentration of an ingredient within a substance. If for example the fat content in milk is 3.5 %, we know that a grams of milk will contain 0.035a grams of fat. In this case ã denotes the number of grams that are not fat. Then the difference a − ã is the amount of fat, and the equation a − ã = 0.035a can be written

(a − ã) / a = 0.035,

which shows the similarity with the relative error.

5.1.3 Properties of the relative error

The examples above show that we may think of the relative error as the concentration of the error a − ã in the total volume a. However, this interpretation can be carried further and linked to the number of common digits in a and ã: If the size of the relative error is approximately 10^(−m), then a and ã have roughly m digits in common. If the relative error is r, this corresponds to −log₁₀ r being approximately m.

Observation 5.3 (Relative error and significant digits). Let a be a nonzero number and suppose that ã is an approximation to a with relative error

r = |a − ã| / |a| ≈ 10^(−m),    (5.1)

where m is an integer. Then roughly the m most significant decimal digits of a and ã agree.

Figure 5.1 shows how numbers with two and three digits in common with a given number are located along the real line.

Figure 5.1. In the figure to the left the red parts of the line indicate the numbers which have two digits in common with a, while the numbers in the blue part have three digits in common with a. The figure on the right is just a magnification of the figure on the left.
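The rule of thumb in observation 5.3 is easy to explore numerically: compute −log₁₀ of the relative error and compare it with the number of leading digits that agree. The pairs below are chosen by us purely for illustration.

    import math

    # Illustrative pairs (a, a~); not taken from the text.
    pairs = [(12.3456, 12.3467), (0.001234, 0.001289), (98765.0, 98712.0)]

    for a, a_tilde in pairs:
        r = abs(a - a_tilde) / abs(a)
        print(f"a = {a}, a~ = {a_tilde}, r = {r:.2e}, -log10(r) = {-math.log10(r):.1f}")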
Sketch of proof. Since a is nonzero, it can be written as a = α · 10^n, where α is a number in the range 1 ≤ |α| < 10 that has the same decimal digits as a, and n is an integer. The approximation ã can be written similarly as ã = α̃ · 10^n, and the digits of α̃ are exactly those of ã. Our job is to prove that roughly the first m digits of a and ã agree.

Let us insert the alternative expressions for a and ã in (5.1). If we cancel the common factor 10^n and multiply by |α|, we obtain

|α − α̃| ≈ |α| · 10^(−m).    (5.2)

Since α is a number in the interval [1, 10), the right-hand side is roughly a number in the interval [10^(−m), 10^(−m+1)). This means that by subtracting α̃ from α, we cancel out the digit to the left of the decimal point, and m − 1 digits to the right of the decimal point. But this means that the m most significant digits of α and α̃ agree.

Table 5.1. Some numbers a with approximations ã and corresponding relative errors r.

Some examples of numbers a and ã with corresponding relative errors are shown in Table 5.1. In the first case everything works as expected. In the second case the rule only works after a and ã have been rounded to two digits. In the third example there are only three common digits even though the relative error is roughly 10^(−4), although the fourth digit is only off by one unit. Similarly, in the last example, the relative error is approximately 10^(−5), but a and ã only have four digits in common, with the fifth digits differing by two units.

The last two examples illustrate that the link between relative error and significant digits is just a rule of thumb. If we go back to the proof of observation 5.3, we note that as the left-most digit in a (and α) becomes larger, the right-hand side of (5.2) also becomes larger. This means that the number of common digits may become smaller, especially when the relative error approaches 10^(−m+1) as well. In spite of this, observation 5.3 is a convenient rule of thumb.

Observation 5.3 is rather vague when it just assumes that r ≈ 10^(−m). The basis for making this more precise is the fact that m is equal to −log₁₀ r, rounded to
the nearest integer. This means that m is characterised by the inequalities

m − 0.5 < −log₁₀ r ≤ m + 0.5,

and this is in turn equivalent to

r = ρ · 10^(−m),  where 1/√10 ≤ ρ < √10.

The absolute error has the nice property that if ã is a good approximation to a, then a is an equally good approximation to ã. The relative error has a similar, but more complicated property.

Lemma 5.4 (Reciprocal relative error). Suppose that ã is an approximation of a with relative error r < 1. Then a is an approximation of ã with relative error

|a − ã| / |ã| ≤ r / (1 − r).    (5.3)

Proof. We first observe that since

1 > r = |a − ã| / |a| = |1 − ã/a|,

both a and ã must have the same sign, for otherwise the right-hand side of this inequality would be greater than 1. On the other hand, we also have from the triangle inequality that

1 = |1 − ã/a + ã/a| ≤ |1 − ã/a| + |ã/a|,    (5.4)

or |ã/a| ≥ 1 − |1 − ã/a| = 1 − r. We therefore have |a| / |ã| ≤ 1/(1 − r). From this we conclude that

r / (1 − r) ≥ r |a| / |ã| = (|a − ã| / |a|) (|a| / |ã|) = |a − ã| / |ã|.

The condition r < 1 is essential in Lemma 5.4. An example with r = 1 is the situation where ã = 0 is an approximation to some arbitrary, nonzero real number a. Since r = 1 corresponds to such an abnormal case, the condition r < 1 cannot be very restrictive.
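Lemma 5.4 is easy to check numerically. In the small sketch below (an illustration of ours) the two numbers satisfy |ã| ≤ |a|, so the bound in (5.3) is attained exactly, which matches the discussion that follows.

    a, a_tilde = 2.0, 1.9

    r = abs(a - a_tilde) / abs(a)                # relative error of a~ as an approximation to a
    r_reverse = abs(a - a_tilde) / abs(a_tilde)  # relative error of a as an approximation to a~

    print(r)            # 0.05
    print(r_reverse)    # about 0.0526
    print(r / (1 - r))  # about 0.0526, the bound in (5.3)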
It is worth pointing out that the inequality in (5.3) is often an equality. In fact, if |ã| ≤ |a|, then there is equality in (5.4) and therefore also in (5.3). But even if |a| < |ã|, the inequality in (5.4) is very close to equality for small r. Therefore, as long as ã is a reasonable approximation of a, the inequality in (5.3) can be considered an equality. In this situation we also have 1 − r ≈ 1, so the right-hand side of (5.3) is close to r. In other words, if ã is a good approximation to a, it does not matter much whether we consider ã to be an approximation to a or vice versa, even when we use a relative error measure.

5.2 Errors in integer arithmetic

Integers and integer arithmetic on a computer are simple both to understand and analyse. All integers up to a certain size are represented exactly and arithmetic with integers is exact. The only thing that can go wrong is that the result becomes larger than what is representable by the type of integer used. The computer hardware, which usually represents integers with two's complement (see section 4.1.3), will not protest and just wrap around from the largest positive integer to the smallest negative integer or vice versa.

As was mentioned in chapter 4, different programming languages handle overflow in different ways. Most languages leave everything to the hardware, which means that overflow is not reported, even though it leads to completely wrong results. This kind of behaviour is not really problematic since it is usually easy to detect (the results are completely wrong), but it may be difficult to understand what went wrong unless you are familiar with the basics of two's complement. Other languages, like Python, gracefully switch to higher precision. In this case, integer overflow is not serious, but may reduce the computing speed. Other error situations, like division by zero, or attempts to extract the square root of negative numbers, lead to error messages and are therefore not serious.

The conclusion is that errors in integer arithmetic are not serious, at least not if you know a little bit about how integer arithmetic is performed.
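The wrap-around behaviour of fixed-size integers can be provoked from Python by using a fixed-width integer type, for instance the 32-bit integers provided by the NumPy library (NumPy is only used here for illustration and is not part of the text's model; recent versions may also print a warning when the overflow occurs). Python's own integers, by contrast, switch to higher precision and give the exact result.

    import numpy as np

    a = np.int32(2**31 - 1)     # largest 32-bit two's complement integer
    print(a + np.int32(1))      # wraps around to -2147483648

    b = 2**31 - 1               # ordinary Python integer
    print(b + 1)                # exact result: 2147483648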
5.3 Errors in floating point arithmetic

Errors in floating point arithmetic are much more subtle than errors in integer arithmetic. In contrast to integers, floating point numbers can be just a little bit wrong. A result that appears to be reasonable may therefore contain errors, and it may be difficult to judge how large the error is.

In this section we will discuss the potential for errors in the four elementary operations, addition, subtraction, multiplication, and division with floating point numbers. Perhaps a bit surprisingly, the conclusion will be that the most critical operation is addition/subtraction, which in certain situations may lead to dramatic errors.

A word of warning is necessary before we continue. In this chapter, and in particular in this section, where we focus on the shortcomings of computer arithmetic, some may conclude that round-off errors are so dramatic that we had better not use a computer for serious calculations. This is a misunderstanding. The computer is an extremely powerful tool for numerical calculations that you should use whenever you think it may help you, and most of the time it will give you answers that are much more accurate than you need. However, you should be alert and aware of the fact that in certain cases errors may cause considerable problems.

The essential features of round-off errors are present if we use the same model of floating point numbers as in section 4.2. Recall that any real number a may be written as a normalised decimal number α · 10^n, where the number α is in the range [0.1, 1) and is called the significand, while the integer n is called the exponent.

Fact 5.5 (Model of floating point numbers). Most of the shortcomings of floating point arithmetic become visible in a computer model that uses 4 digits for the significand and 1 digit for the exponent, plus an optional sign for both the significand and the exponent.

A typical normalised (decimal) number in this model therefore has the form ±0.d₁d₂d₃d₄ · 10^(±e), with four significand digits and a one-digit exponent. The examples in this section will use this model, but the results can be generalised to a floating point model in base β, see exercise 3.

On most computers, floating point numbers are represented, and arithmetic performed, according to the IEEE¹ standard. This is a carefully designed suggestion for how floating point numbers should behave, and is aimed at providing good control of round-off errors and prescribing the behaviour when numbers reach the limits of the floating point model. Regardless of the particular details of how the floating point arithmetic is performed, however, the use of floating point numbers inevitably leads to problems, and in this section we will consider some of those.

¹ IEEE is an abbreviation for Institute of Electrical and Electronics Engineers, which is a large professional society for engineers and scientists in the USA. The floating point standard is described in a document known as IEEE 754.
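Most programming environments let us inspect the parameters of the floating point format that is actually used. In Python they are collected in sys.float_info; the values indicated in the comments below are those of the common IEEE 64-bit (double precision) format and may in principle differ between environments.

    import sys

    print(sys.float_info.dig)      # reliable decimal digits, typically 15
    print(sys.float_info.epsilon)  # gap between 1.0 and the next float, about 2.2e-16
    print(sys.float_info.max)      # largest representable number, about 1.8e308
    print(sys.float_info.min)      # smallest positive normalised number, about 2.2e-308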
One should be aware of the fact that the IEEE standard is extensive and complicated, and far from all computers and programming languages support the whole standard. So if you are going to do serious floating point calculations, you should check how much of the IEEE standard is supported in your environment. In particular, you should realise that if you move your calculations from one environment to another, the results may change slightly because of different implementations of the IEEE standard.

5.3.1 Errors in floating point representation

Recall from chapter 3 that most real numbers cannot be represented exactly with a finite number of decimals. In fact, there are only a finite number of floating point numbers in total. This means that very few real numbers can be represented exactly by floating point numbers, and it is useful to have a bound on how much the floating point representation of a real number deviates from the number itself.

Lemma 5.6. Suppose that a is a nonzero real number within the range of base-β normalised floating point numbers with an m digit significand, and suppose that a is represented by the nearest floating point number ã. Then the relative error in this approximation is at most

|a − ã| / |a| ≤ 5 · β^(−m).    (5.5)

Proof. For simplicity we first do the proof in the decimal numeral system. We write a as a normalised decimal number, a = α · 10^n, where α is a number in the range [0.1, 1). The floating point approximation ã can also be written as ã = α̃ · 10^n, where α̃ is α rounded to m digits. Suppose for example that m = 4. Then the largest possible distance between α and α̃ is 5 units in the fifth decimal, i.e., the error is bounded by 5 · 10^(−5) = 0.00005. A similar argument shows that in the general decimal case we have |α − α̃| ≤ 5 · 10^(−m−1).
The relative error is therefore bounded by

|a − ã| / |a| = |α − α̃| · 10^n / (|α| · 10^n) ≤ 5 · 10^(−m−1) / |α| ≤ 5 · 10^(−m),

where the last inequality follows since |α| ≥ 0.1. Although we only considered normalised numbers in bases 2 and 10 in chapter 4, normalised numbers may be generalised to any base. In base β the distance between α and α̃ is at most 0.5β · β^(−m−1) = 0.5 · β^(−m), see exercise 3. Since we also have |α| ≥ β^(−1), an argument completely analogous to the one in the decimal case proves (5.5).

Note that lemma 5.6 states what we already know, namely that a and ã have roughly m − 1 or m digits in common.

Floating point numbers on most computers use binary representation, and we know that in the binary numeral system all fractions a/b where b = 2^k for some positive integer k can be represented exactly. This means that numbers like 1/2, 1/4 and 5/16 are represented exactly. On the other hand, it is easy to forget that numbers like 0.1 and 0.43 are not represented exactly.
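We can see exactly which number the computer stores instead of 0.1 by converting the stored binary value back to decimal form. In Python the decimal module can perform this conversion; it is used here only as a magnifying glass.

    from decimal import Decimal

    # The exact value of the 64-bit floating point number closest to 0.1.
    print(Decimal(0.1))
    # 0.1000000000000000055511151231257827021181583404541015625

    # A fraction with a power of 2 in the denominator is stored exactly.
    print(Decimal(5/16))  # 0.3125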
5.3.2 Floating point errors in addition/subtraction

We are now ready to see how round-off errors affect the arithmetic operations. As we all know, a − b = a + (−b), so in the following it is sufficient to consider addition of numbers. The basic procedure for adding floating point numbers is simple (in reality it is more involved than what is stated here).

Algorithm 5.7. To add two real numbers on a computer, the following steps are performed:

1. The number with largest absolute value is written in normalised form a = α · 10^n, and the other number b is written as b = β · 10^n with the same exponent as a and the same number of digits for the significand β.
2. The significands α and β are added, γ = α + β.
3. The result c = γ · 10^n is converted to normalised form.

This apparently simple algorithm contains a serious pitfall which is most easily seen from some examples. Let us first consider a situation where everything works out nicely.

Example 5.8. Suppose that a and b are two positive numbers, both representable exactly in the model, and such that the exact sum of their significands has five digits. We convert the numbers to normal form and add the two significands; in exact arithmetic this gives the correct answer. However, the result is not in normal form since the significand has five digits. We therefore perform rounding and obtain the final result. The relative error in this result is roughly 10^(−4).

Example 5.8 shows that we easily get an error when normalised numbers are added and the result converted to normalised form with a fixed number of digits for the mantissa. In this first example all the digits of the result are correct, so the error is far from serious.

Example 5.9. If a and b are of rather different magnitude, we convert the larger number a to normal form, and the smaller number b is then written in the same form, with the same exponent as a. The significand of this second number must then be rounded to four digits before the addition can be carried out. Here the relative error in the computed sum is again approximately 10^(−4).

The error in example 5.9 may seem serious, but once again the result is correct to four decimal digits, which is the best we can hope for when we only have this number of digits available. Although there have been errors in the two previous examples, they have not been dramatic. This is reflected in the relative errors, which are close to 10^(−4). This confirms that there should be about 4 correct digits in the results according to observation 5.3.
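The 4-digit decimal model of fact 5.5 can be imitated with Python's decimal module by setting the precision of the arithmetic to four significant digits. The numbers below are our own illustrations of the rounding step in algorithm 5.7; they are not the numbers used in examples 5.8 and 5.9.

    from decimal import Decimal, getcontext

    getcontext().prec = 4            # four significant digits, as in fact 5.5

    # Two numbers whose exact sum needs five digits (cf. example 5.8).
    print(Decimal("5.645") + Decimal("7.821"))   # exact sum 13.466 is rounded to 13.47

    # Numbers of rather different magnitude (cf. example 5.9).
    print(Decimal("123.4") + Decimal("0.0567"))  # exact sum 123.4567 is rounded to 123.5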
Example 5.10. Consider next a case where a and b have opposite signs. We first rewrite the numbers in normal form with a common exponent and then add the significands, which really amounts to a subtraction. If the leading digits of the two significands agree, they cancel, and when the sum is finally written in normal form, the significand must be shifted to the left and new digits appended on the right. If a and b are representable exactly in the model, these appended digits are correct, and the relative error in a + b is 0.

Example 5.10 may seem innocent since the result is exact, but in fact it contains the seed for serious problems. A similar example will reveal what may easily happen.

Example 5.11. Suppose that a = 10/7 and that b is a negative number of almost the same magnitude, representable exactly with four digits. Conversion to normal form yields a ≈ 0.1429 · 10^1, where the significand of 10/7 has been rounded to four digits. When the significands are added, the leading digits cancel, and when the result is converted to normal form, new digits must again be appended on the right. The relative error in the computed result is now roughly 10^(−1).
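Cancellation can be reproduced in the same 4-digit decimal model. We take a = 10/7 as in example 5.11 and subtract a nearby four-digit number; the value of b is not given in the text above, so the one used below is simply our own illustration.

    from decimal import Decimal, getcontext

    getcontext().prec = 4
    a = Decimal(10) / Decimal(7)     # 10/7 rounded to four digits: 1.429
    b = Decimal("1.427")             # an illustrative nearby number
    print(a - b)                     # computed result: 0.002

    getcontext().prec = 28           # recompute the true result with high precision
    print(Decimal(10) / Decimal(7) - Decimal("1.427"))   # 0.001571428571...

    # The computed result 0.002 has no correct digits: the relative error is
    # about 0.27, so essentially all accuracy was lost in the subtraction.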
In example 5.11 there is a serious problem: We started with two numbers with full four digit accuracy, but the computed result had only one correct digit. In other words, we lost almost all accuracy when the subtraction was performed. The potential for this loss of accuracy was present in example 5.10, where we also had to add digits in the last step, but there we were lucky in that the added digits were correct. In example 5.11, however, there would be no way for a computer to know that the additional digits should be taken from the decimal expansion of 10/7. Note how the bad loss of accuracy in example 5.11 is reflected in the relative error.

One could hope that what happened in example 5.11 is exceptional; after all, example 5.10 worked out very well. This is not the case. It is example 5.10 that is exceptional, since we happened to add the correct digits to the result. This was because the numbers involved could be represented exactly in our decimal model. In example 5.11 one of the numbers was 10/7 which cannot be represented exactly, and this leads to problems. Our observations can be summed up as follows.

Observation 5.12. Suppose the k most significant digits in the two numbers a and b are the same. Then k digits may be lost when the subtraction a − b is performed with algorithm 5.7.

This observation is very simple: If we start out with m correct digits in both a and b, and the k most significant of those are equal, they will cancel in the subtraction. When the result is normalised, the missing k digits will be filled in from the right. These new digits are almost certainly wrong and hence the computed result will only have m − k correct digits. This effect is called cancellation and is a very common source of major round-off problems. The moral is: Be on guard when two almost equal numbers are subtracted.

In practice, a computer works in the binary numeral system. However, the cancellation described in observation 5.12 is just as problematic when the numbers are represented as normalised binary numbers.

Example 5.9 illustrates another effect that may cause problems in certain situations. If we add a very small number ɛ to a nonzero number a, the result may become exactly a. This means that a test like

if a = a + ɛ

may in fact become true.

Observation 5.13. Suppose that the floating point model in base β uses m digits for the significand and that a is a nonzero floating point number. The addition a + ɛ will be computed as a if

|ɛ| < 0.5 · β^(−m) · |a|.    (5.8)
The exact factor to be used on the right in (5.8) depends on the details of the arithmetic. The factor 0.5 · β^(−m) used here corresponds to rounding to the nearest m digits.

A general observation to be made from the examples is simply that there are almost always small errors in floating point computations. This means that if you have computed two different floating point numbers a and ã that from a strict mathematical point of view should be equal, it is very risky to use a test like

if a = ã

since it is rather unlikely that this will be true. One situation where this problem may occur is in a for loop like

for x = 0.0, 0.1, ..., 1.0

What happens behind the scenes is that 0.1 is added to x each time we pass through the loop, and the loop stops when x becomes larger than 1.0. The last time, the result may become 1 + ɛ rather than 1, where ɛ is some small positive number, and hence the last pass through the loop with x = 1 may never happen. In fact, for many floating point numbers a, b and c, even the two mathematically equivalent computations (a + b) + c and a + (b + c) give different results!
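Both effects are easy to observe with ordinary 64-bit floating point numbers; the snippet below is a small illustration of ours, not code from the text.

    a = 1.0
    eps = 1e-20
    print(a + eps == a)     # True: eps is far below the threshold in (5.8)

    x = 0.0
    for i in range(10):
        x += 0.1            # add 0.1 ten times
    print(x)                # 0.9999999999999999, not 1.0
    print(x == 1.0)         # False: exact comparison of computed floats is risky

    print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))   # False: addition is not associative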
5.3.3 Floating point errors in multiplication and division

We usually think of multiplication and division as being more complicated than addition, so one may suspect that these operations may also lead to larger errors for floating point numbers. This is not the case.

Example 5.14. Consider two numbers a and b which are both representable exactly in the model. To multiply the numbers, we multiply the significands and add the exponents, before we normalise the result at the end, if necessary. The product of the two four-digit significands will in general have more than four digits, so the significand in the result must be rounded to four digits, and this rounding is the only error committed.

Let us also consider an example of division.

Example 5.15. We use the same numbers as in example 5.14, but now we perform the division a/b, i.e., we divide the significands and subtract the exponents. The quotient of the significands must again be rounded to four digits, and this gives the computed result.

The most serious problem with floating point arithmetic is loss of correct digits. In multiplication and division this cannot happen, so these operations are quite safe.

Observation 5.16. Floating point multiplication and division do not lead to loss of correct digits as long as the numbers are within the range of the floating point model. In the worst case, the last digit in the result may be one digit wrong.

The essence of observation 5.16 is that the above examples are representative of what happens. However, there are some other potential problems to be aware of. First of all, observation 5.16 only says something about one multiplication or division. When many such operations are strung together, with the output of one being fed to the next, the errors may accumulate and become large even when they are very small at each step. We will see examples of this in later chapters.

In observation 5.16 there is one assumption, namely that the numbers are within the range of the floating point model. This applies to each of the operands (the two numbers to be multiplied or divided) and the result. When the numbers approach the limits of our floating point model, things will inevitably go wrong.

Underflow. Underflow occurs when a positive result becomes smaller than the smallest representable, positive number; the result is then set to 0 (this also happens with negative numbers with small magnitude). In most environments you will get a warning, but otherwise the calculations will continue. This will usually not be a problem, but you should be aware of what happens.
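With 64-bit numbers the smallest positive normalised number is about 2.2 · 10^(−308), and underflow is easy to trigger. Python itself gives no warning in this small illustration of ours; the product below simply becomes 0.0.

    tiny = 1e-200
    print(tiny * tiny)          # 0.0: the exact result 10^(-400) underflows to zero
    print(tiny * tiny == 0.0)   # True, and no warning or error is raised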
Overflow. When the magnitude of a result becomes too large for the floating point standard, overflow occurs. This is indicated by the result becoming infinity, either positive infinity or negative infinity. There is a special combination of bits in the IEEE standard for these infinity values. When you see infinity appearing in your calculations, you should be wary; the chances are high that the reason is some programming error, or even worse, an error in your algorithm. Note that an operation like a/0.0 will yield infinity when a is a nonzero number.

Undefined operations. The division 0.0/0.0 is undefined in mathematics, and in the IEEE standard this will give the result NaN (not a number). Other operations that may produce NaN are square roots of negative numbers, inverse sine or cosine of a number larger than 1, the logarithm of a negative number, etc. (unless you are working with complex numbers). NaN is infectious in the sense that NaN combined with a normal number always yields NaN. For example, the result of the operation 1 + NaN will be NaN.

5.4 Rewriting formulas to avoid rounding errors

Certain formulas lead to large rounding errors when they are evaluated with floating point numbers. In some cases, though, the number can be computed with an alternative formula that is not problematic. In this section we will consider some examples of this.

Example 5.17. Suppose we are going to evaluate the expression

1 / (√(x + 1) − √x)    (5.9)

for a large number like x = 10^16. The problem here is the fact that √x and √(x + 1) are almost equal when x is large:

√x = 10^8 = 100000000,  √(x + 1) ≈ 100000000.000000005.

Even with 64-bit floating point numbers the square root will therefore be computed as 10^8, so the denominator in (5.9) will be computed as 0, and we get division by 0. This is a consequence of floating point arithmetic since the two terms in the denominator are not really equal. A simple trick helps us out of this problem. Note that

1 / (√(x + 1) − √x) = (√(x + 1) + √x) / ((√(x + 1) − √x)(√(x + 1) + √x)) = (√(x + 1) + √x) / ((x + 1) − x) = √(x + 1) + √x.

This alternative expression can be evaluated for large values of x without any problems with cancellation. The result is √(x + 1) + √x ≈ 2.0000000 · 10^8, where all digits are correct.
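The effect of the rewriting is easy to check with 64-bit floating point numbers. In the sketch below (our own illustration) the naive formula (5.9) fails with a division by zero, while the rewritten expression gives the answer to full accuracy; the error is caught only so that both variants can be shown in one run.

    import math

    x = 1e16

    # Naive formula (5.9): the denominator is computed as exactly 0.
    try:
        naive = 1.0 / (math.sqrt(x + 1) - math.sqrt(x))
    except ZeroDivisionError:
        naive = None
    print(naive)                            # None: division by zero occurred

    # Rewritten formula: no cancellation.
    print(math.sqrt(x + 1) + math.sqrt(x))  # 200000000.0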
The fact that we were able to find an alternative formula in example 5.17 may seem very special, but it is not unusual. Here is another example.

Example 5.18. For most values of x, the expression

1 / (cos²x − sin²x)    (5.10)

can be computed without problems. However, for x = π/4 the denominator is 0, so we get division by 0, since cos x and sin x are equal for this value of x. This means that when x is close to π/4 we will get cancellation. This can be avoided by noting that cos²x − sin²x = cos 2x, so the expression (5.10) is equivalent to

1 / cos 2x.

This can be evaluated without problems for all values of x for which the denominator is nonzero, as long as we do not get overflow.

5.5 Summary

The main content of this chapter is a discussion of basic errors in computer arithmetic. For integers, the errors are dramatic when they occur and therefore quite easy to detect. For floating point numbers, the errors are much more subtle since they are usually not dramatic. This means that it is important to understand and recognise situations and computations that may lead to significant round-off errors. The most dangerous operation is subtraction of two almost equal numbers, since this leads to cancellation of digits. We may also get large round-off errors from a long sequence of operations with small round-off errors in each step; we will see examples of this in later chapters.

Exercises

5.1 Suppose that ã is an approximation to a in the problems below. Calculate the absolute and relative errors in each case, and check that the relative error estimates the number of correct digits as predicted by observation 5.3.

a) a = 1, ã = …
b) a = 24, ã = …
c) a = 1267, ã = …
d) a = 124, ã = …

5.2 Use the floating point model defined in this chapter, with 4 digits for the significand and 1 digit for the exponent, and use algorithm 5.7 to do the calculations below in this model.

a) …
b) …
c) …
d) …
e) 1 + x − e^x for x = …

5.3 a) Formulate a model for floating point numbers in base β.
b) Generalise the rule for rounding of fractional numbers in the decimal system to the octal system and the hexadecimal system.
c) Formulate the rounding rule for fractional numbers in the base-β numeral system.

5.4 From the text it is clear that on a computer, the addition 1.0 + ɛ will give the result 1.0 if ɛ is sufficiently small. Write a computer program which can help you to determine the smallest integer n such that 1.0 + 2^(−n) gives the result 1.0.

5.5 Consider the simple algorithm

x := 0.0;
while x ≤ 2.0
    print x
    x := x + 0.1;

What values of x will be printed? Implement the algorithm in a program and check that the correct values are printed. If this is not the case, explain what happened.

5.6 a) A fundamental property of real numbers is given by the distributive law

(x + y)z = xz + yz.    (5.11)

In this problem you are going to check whether floating point numbers obey this law. To do this you are going to write a program that runs through a loop a large number of times and each time draws three random numbers in the interval (0, 1) and then checks whether the law holds (whether the two sides of (5.11) are equal) for these numbers. Count how many times the law fails, and at the end, print the percentage of times that it failed. Print also a set of three numbers for which the law failed.

b) Repeat (a), but test the associative law (x + y) + z = x + (y + z) instead.
c) Repeat (a), but test the commutative law x + y = y + x instead.
d) Repeat (a) and (b), but test the associative and commutative laws for multiplication instead.
5.7 The binomial coefficient C(n, i) is defined as

C(n, i) = n! / (i! (n − i)!),    (5.12)

where n ≥ 0 is an integer and i is an integer in the interval 0 ≤ i ≤ n. The binomial coefficients turn up in a number of different contexts and must often be calculated on a computer. Since all binomial coefficients are integers (this means that the division in (5.12) can never give a remainder), it is reasonable to use integer variables in such computations. For small values of n and i this works well, but for larger values we may run into problems because the numerator and denominator in (5.12) may become larger than the largest permissible integer, even if the binomial coefficient itself may be relatively small. In many languages this will cause overflow, but even in a language like Python, which on overflow converts to a format in which the size is only limited by the resources available, the performance is reduced considerably. By using floating point numbers we may be able to handle larger numbers, but again we may encounter too big numbers during the computations even if the final result is not so big.

An unfortunate feature of the formula (5.12) is that even if the binomial coefficient is small, the numerator and denominator may both be large. In general, this is bad for numerical computations and should be avoided, if possible. If we consider the formula (5.12) in some more detail, we notice that many of the numbers cancel out,

C(n, i) = (1 · 2 ⋯ i · (i + 1) ⋯ n) / ((1 · 2 ⋯ i) · (1 · 2 ⋯ (n − i))) = ((i + 1) ⋯ n) / (1 · 2 ⋯ (n − i)).

Employing the product notation we can therefore write C(n, i) as

C(n, i) = ∏_{j=1}^{n−i} (i + j) / j.

a) Write a program for computing binomial coefficients based on this formula, and test your method on the coefficients C(9998, 4), C(…, 70) and C(1000, …). Why do you have to use floating point numbers and what results do you get?

b) Is it possible to encounter too large numbers during those computations if the binomial coefficient to be computed is smaller than the largest floating point number that can be represented by your computer?

c) In our derivation we cancelled i! against n! in (5.12), and thereby obtained the alternative expression for C(n, i). Another method can be derived by cancelling (n − i)! against n! instead. Derive this alternative method in the same way as above, and discuss when the two methods should be used (you do not need to program the other method; argue mathematically).
5.8 Identify values of x for which the formulas below may lead to large round-off errors, and suggest alternative formulas which do not have these problems.

a) √(x + 1) − √x.
b) ln x² − ln(x² + x).
c) cos²x − sin²x.

5.9 Suppose you are going to write a program for computing the real roots of the quadratic equation ax² + bx + c = 0, where the coefficients a, b and c are real numbers. The traditional formulas for the two solutions are

x₁ = (−b − √(b² − 4ac)) / (2a),  x₂ = (−b + √(b² − 4ac)) / (2a).

Identify situations (values of the coefficients) where the formulas do not make sense or lead to large round-off errors, and suggest alternative formulas for these cases which avoid the problems.