Section 2. Algorithm Complexity
- Darcy Martin
1 Section 2 Algorithm Complexity
2 Algorithm Complexity
This section of the course provides a framework for the analysis of algorithms. Constructing such a framework is not a simple task: we need some way to compare the performance of many very different algorithms, and the results we end up with should be equally applicable to any platform. It is not sufficient to simply implement all the algorithms we wish to compare in Java, run them on a PC, and use a stopwatch to calculate timings. That would tell us nothing about how the same algorithms might run when written in C++ on a supercomputer. This means our method of analysis must be:
- Independent of the hardware the algorithm will run on. (Is it a PC or a supercomputer?)
- Independent of the implementation. (Is it written in Java or Visual Basic?)
- Independent of the type of algorithm input. (For a sorting algorithm: are we sorting numbers or words?)
3 Algorithm Analysis
Our framework should allow us to make predictions about our algorithms. In particular, it should allow us to predict how efficiently our algorithms will scale: given an analysis of a sorting algorithm, for example, we should be able to predict how it will perform given 10, 100 and 1000 items. As we shall see later, implementation details and hardware speed are often irrelevant. The framework will also allow us to make design decisions about the appropriateness of one algorithm as opposed to another. Often in computer science there are several programming paradigms suitable for solving a problem, e.g. divide and conquer, backtracking, greedy programming, dynamic programming. Proper analysis will allow us to decide which paradigm is most suitable when designing an algorithm.
4 Algorithm Analysis
Definitions:
Basic operation: The performance of an algorithm depends on the number of basic operations it performs.
Worst-case input: Often when analysing algorithms we take a pessimistic approach. The worst-case input for an algorithm is the input for which the algorithm performs the most basic operations. For example, a specific sorting algorithm may perform very well if the input (the numbers) it is given to sort is already partially sorted. It may perform less well if the input is arranged randomly. We generally consider the worst-case scenario for every algorithm, as this provides a fair (and, as it turns out, simpler) basis for comparison.
Sometimes we will consider the average-case time for an algorithm. This is often far more difficult to evaluate: it often depends on the nature of the input rather than on the algorithm. For example, on average a sorting algorithm may sort an English passage of words far faster than its worst-case analysis predicts, simply because English contains many short words. Sorting passages in German may result in an average-case time closer to the worst case.
5 Basic Operations
Deciding exactly what a basic operation is can seem a little subjective. For general algorithms like sorting, the basic operation is well defined (in this case the basic operation is comparison). A basic operation may be:
- An addition, multiplication or other arithmetic operation.
- A simple comparison.
- A complex series of statements that moves a database record from one part of the database to another.
- A method which calculates the log of a number.
In fact, in general the actual specification of the basic operation is not that important. We need only ensure that its performance is roughly constant for any given input. For example, a method which calculates the factorial of a number would be inappropriate as a basic operation, as calculating the factorial of 5 may be far faster than calculating the factorial of 150.
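To make the factorial point concrete, here is a small sketch (the class and method names are ours, not from the slides) that counts the multiplications an iterative factorial performs. The count grows with n, which is exactly why "compute a factorial" fails the roughly-constant-cost test for a basic operation:

```java
public class FactorialCost {
    // Count the multiplications performed by an iterative factorial.
    // The work grows linearly with n, so "compute a factorial" is not
    // a constant-time basic operation. (The running product overflows
    // long for large n, but we only care about the operation count.)
    public static int countMultiplications(int n) {
        int count = 0;
        long result = 1;
        for (int i = 2; i <= n; i++) {
            result = result * i; // one basic multiplication
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countMultiplications(5));   // 4 multiplications
        System.out.println(countMultiplications(150)); // 149 multiplications
    }
}
```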
6 A simple example
public static void main(String args[]) {
    int x, y;
    x = 10;
    y = (int)(Math.random() * 15); // y is a random number between 0 and 14
    while ((x > 0) && (y > 0)) {
        x = x - 1;
        y = y - 1;
    }
}
What is the basic operation in this algorithm? What is the worst case for the algorithm? What is the average case for the algorithm?
7 A simple example
public static void main(String args[]) {
    int x, y;
    x = 10;                        // initialisation
    y = (int)(Math.random() * 15); // y is a random number between 0 and 14
    while ((x > 0) && (y > 0)) {
        x = x - 1;                 // basic operation
        y = y - 1;                 // basic operation
    }
}
We tend to ignore the time required to do initialisation: it is constant for any given input. The running time of the program obviously depends on how many times the loop goes round. The work done inside the loop forms the basic operation.
8 Case Times
It should be fairly obvious that the worst case for this algorithm is 10 basic operations. This happens if the value of y is 10 or more: the loop will always end after 10 iterations. y can take on values between 0 and 14, and the number of basic operations for each value of y is as follows:
y         : 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
basic ops : 0 1 2 3 4 5 6 7 8 9 10 10 10 10 10
Since each value of y is equally likely, the average number of iterations is around 6.33 (95/15). This average analysis would change if y were more likely to have higher values, i.e. it depends on the nature or distribution of y. The worst-case analysis will not change. Usually we are not too concerned with algorithms like this, since they have a constant worst case - this sort of thing only worries computer game programmers.
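The average-case figure can be checked by brute force. This sketch (class and method names are ours) runs the loop once for each equally likely starting value of y and sums the iteration counts, reproducing the 95/15 average:

```java
public class AverageCase {
    // Number of loop iterations for a given starting y (x starts at 10):
    // the loop runs until either x or y reaches 0, i.e. min(y, 10) times.
    public static int iterations(int y) {
        int x = 10;
        int count = 0;
        while (x > 0 && y > 0) {
            x = x - 1;
            y = y - 1;
            count++;
        }
        return count;
    }

    // Total iterations over all equally likely y values 0..14.
    public static int totalIterations() {
        int total = 0;
        for (int y = 0; y <= 14; y++) {
            total += iterations(y);
        }
        return total; // 95, so the average is 95/15, about 6.33
    }

    public static void main(String[] args) {
        System.out.println(totalIterations()); // prints 95
    }
}
```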
9 Input Size
More often we are concerned with algorithms whose running time varies according to input size, e.g.
public static void simple(int n) {
    while (n > 0) {
        n = n - 1;
    }
}
The worst case for this algorithm is n basic operations. In general our worst-case analyses will be in terms of the input size. This example is somewhat misleading, as the input size is determined by a single integer. Some more typical input sizes are:
sorting : input size - number of items to sort : basic operation - comparison of two items
multiplication : input size - number of digits in both numbers : basic operation - single-digit multiplication
10 Input Size
database searching : input size - number of database records : basic operation - checking a record
maze solving : input size - number of paths between junctions : basic operation - checking for the next junction or the finish.
Working with basic operations
We need some rules or operators which allow us to work out how many basic operations are performed in the worst case of an algorithm.
Sequences: The time taken to execute a sequence of basic operations is simply the sum of the times taken to execute each of the operations. T(sequence(b1;b2;b3)) = T(b1) + T(b2) + T(b3) - usually we roll these into a single basic operation.
Alternatives: When we have an if statement which will perform one basic operation or another, we take the time as that of the longest basic operation: T(if (cond) b1 else b2) = Max(T(b1), T(b2))
11 Calculating times
Iteration: when we have a loop, the running time depends on the time of the basic operation and the worst-case number of iterations.
T(loop) = T(b1) * worst-case number of iterations
Exercise: Identify the basic operations and input sizes in the following segments of code. Determine the running time.
Segment 1 -
int n = 5;
while (n > 0)
    n = n - 1;
Segment 2 -
public static void algor(int n) {
    int x = 5;
    if (n > 0)
        x = x + 5;
    else
        x = x - 5;
}
12 Calculating times
Segment 3 -
public static void algor(int n) {
    int x = 5;
    while (n > 0) {
        n = n - 1;
        if (n % 2 == 0)
            x = x + 5;
        else
            x = x - 5;
    }
}
Segment 4 -
public static void algor(int n) {
    int x = 5;
    if (n % 2 == 0)
        while (n > 0) {
            n = n - 1;
            if (n % 2 == 0)
                x = x + 5;
            else
                x = x - 5;
        }
}
13 Calculating times
Segment 1 -
int n = 5;
while (n > 0)
    n = n - 1;
Basic operation: n = n - 1 - we will call this b1.
Time to execute b1: T(b1) - this is just notation.
Worst-case number of iterations: the loop is performed at most 5 times.
Time = T(b1) * 5
Input size: In the above expression all the elements are constant, therefore this piece of code has a constant running time. Input size has no effect on this code.
14 Calculating times
Segment 2 -
public static void algor(int n) {
    int x = 5;
    if (n > 0)
        x = x + 5; // basic operation b1
    else
        x = x - 5; // basic operation b2
}
With an if statement we assume the worst. We know one alternative or the other must be executed, so at worst the running time is the slower of the two alternatives.
Time = Max(T(b1), T(b2))
This piece of code has no loop - again the running time contains only constant elements. Input size has no effect.
15 Calculating times
Segment 3 -
public static void algor(int n) {
    int x = 5;
    while (n > 0) {
        n = n - 1;     // basic operation b1
        if (n % 2 == 0)
            x = x + 5; // basic operation b2
        else
            x = x - 5; // basic operation b3
    }
}
Working from the inside out: the if statement takes Max(T(b2), T(b3)). This is in sequence with basic operation b1, so the total time to perform the operations inside the loop is T(b1) + Max(T(b2), T(b3)). The loop depends on the value of n - it will perform at most n iterations.
Time = n * [T(b1) + Max(T(b2), T(b3))] (we assume n >= 0)
This equation depends on n - as n gets larger the time increases. The input size is n.
16 Calculating Times
Segment 4 -
public static void algor(int n) {
    int x = 5;
    if (n % 2 == 0)
        while (n > 0) {
            n = n - 1;     // basic operation b1
            if (n % 2 == 0)
                x = x + 5; // basic operation b2
            else
                x = x - 5; // basic operation b3
        }
}
As before, the time for the loop is n * [T(b1) + Max(T(b2), T(b3))]. The loop is enclosed inside an if statement. This if statement either gets executed or it doesn't - this implies the running time is
Max(n * [T(b1) + Max(T(b2), T(b3))], 0)
Obviously this is equivalent to n * [T(b1) + Max(T(b2), T(b3))]. This equation depends on n - as n gets larger the time increases. The input size is n.
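The claim that these segments are linear in n can be verified empirically. This sketch (an instrumented version of Segment 3; the counter and class name are ours) counts how many times the loop body executes - doubling n doubles the count:

```java
public class SegmentCount {
    // Instrumented Segment 3: count how many times the loop body
    // (basic operation b1 followed by one of b2/b3) executes.
    public static int basicOps(int n) {
        int x = 5;
        int count = 0;
        while (n > 0) {
            n = n - 1;     // b1
            if (n % 2 == 0)
                x = x + 5; // b2
            else
                x = x - 5; // b3
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // The count equals n: the running time is linear in n.
        System.out.println(basicOps(100)); // prints 100
        System.out.println(basicOps(200)); // prints 200
    }
}
```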
17 Input Size
As mentioned previously, we are not concerned with algorithms which have a constant running time. A constant running time is the best kind of running time we could hope for. Code segments 1 and 2 both have constant running times; we could not possibly substitute an alternative algorithm which would be generally faster. When we reach a situation like this, improvements in speed depend purely on how well the code is implemented - a purely mundane task. Code segments 3 and 4 both had a running time proportional to n (some constant times n). With these problems, if n doubles the running time doubles. When we attempt to analyse algorithms in a general sense, it is this dependency on input size which is most important.
18 Input Size Dependency
The following table illustrates four different algorithms and the worst number of basic operations each performs:
Input Size : Algorithm 1     : Algorithm 2 : Algorithm 3 : Algorithm 4
1          : 2               : 2           : 4           : 8
10         : 1024            : 200         : 40          : 11.32
100        : 2^100           : 20,000      : 400         : 14.64
1000       : 2^1000          : 2,000,000   : 4000        : 17.97
If we assume that each algorithm performs the same basic operations, and that a typical computer can execute 1 million basic operations a second, then we get the following execution times.
19 Input Size Dependency
Input Size : Algorithm 1      : Algorithm 2 : Algorithm 3 : Algorithm 4
1          : 2µs              : 2µs         : 4µs         : 8µs
10         : ~1ms             : 200µs       : 40µs        : 11.32µs
100        : ~4x10^16 years   : 20ms        : 0.4ms       : 14.64µs
1000       : ~10^287 years    : 2s          : 4ms         : 17.97µs
These algorithms have running times proportional to 2^n, n^2, n and log2(n) for input size n. These figures should illustrate the danger of choosing an optimal algorithm by simply implementing several algorithms to solve a problem and choosing the fastest. If we were to choose based on an input size of 1 then we would probably choose the first one!
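The table above can be regenerated rather than memorised. This sketch (class and method names are ours) evaluates the four basic-operation counts - 2^n, 2n^2, 4n and log2(n) + 8, which reproduce the rows above - at 1 million basic operations per second, so each count is directly a time in microseconds:

```java
public class GrowthTable {
    // Basic-operation counts for the four algorithms; at 1 million basic
    // operations per second, one operation costs 1 microsecond, so the
    // count is also the running time in microseconds.
    public static double micros(String algorithm, int n) {
        switch (algorithm) {
            case "exponential": return Math.pow(2, n);           // 2^n
            case "quadratic":   return 2.0 * n * n;              // 2n^2
            case "linear":      return 4.0 * n;                  // 4n
            case "logarithmic": return Math.log(n) / Math.log(2) + 8;
            default: throw new IllegalArgumentException(algorithm);
        }
    }

    public static void main(String[] args) {
        int[] sizes = {1, 10, 100, 1000};
        for (int n : sizes) {
            System.out.printf("%4d %14.4g %12.1f %10.1f %8.2f%n",
                n, micros("exponential", n), micros("quadratic", n),
                micros("linear", n), micros("logarithmic", n));
        }
    }
}
```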
20 Big-O Notation
It should be clear from the previous example that the efficiency of an algorithm depends primarily on how the running time grows with the input size. This is the framework we have been searching for all along: it is independent of implementation details, hardware specifics and type of input. Our task when performing an analysis is to describe how the running time of a given algorithm is proportional to the algorithm's input size, i.e. we make statements like "the running time of this algorithm will quadruple when the input size doubles" or "the running time is 5 times the log of the input size". Statements like these can be open to different interpretations - and since algorithm analysis is so central to computer science, a more formal notation exists. This notation is known as Big-O notation.
21 Big-O Notation
Big-O notation - formal definition:
Let f: N -> R, i.e. f is a mathematical function that maps positive whole numbers to real numbers. E.g. f(n) = 3.3n => f(2) = 6.6, f(4) = 13.2. In fact f abstractly represents a timing function - consider it as a function which returns the running time in seconds for a problem of input size n. Let g: N -> R be another of these timing functions.
If there exist values c and n0 (where n0 is the smallest allowable input size) such that f(n) <= c*g(n) when n >= n0, then f(n) is O(g(n)).
Less formally: given a problem with input size n (n must be at least as big as our smallest allowable input size n0), our timing function f(n) is always less than a constant times some other timing function g(n). f(n) represents the timing of the algorithm we are trying to analyse; g(n) represents some well-known (and often simple) function whose behaviour is very familiar.
22 Big-O Notation
What's going on? How is this useful? During algorithm analysis we typically have a collection of well-known functions whose behaviour for a particular input size is well understood. We've already seen a few of these:
Linear : g(n) = n - if n doubles, the running time doubles
Quadratic : g(n) = n^2 - if n doubles, the running time quadruples
Logarithmic : g(n) = lg(n) - if n doubles, the running time increases by a constant amount
Exponential : g(n) = 2^n - as n increases, the running time explodes.
These functions allow us to classify our algorithms. When we say f(n) is O(g(n)) we are saying that, beyond some input size, f will behave similarly to g(n). Essentially this expresses the idea of an algorithm taking at most some number of basic operations.
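The doubling behaviour of these classes is easy to check numerically. This sketch (class and method names are ours) computes the ratio g(2n)/g(n) for the linear and quadratic classes, and the difference for the logarithmic class:

```java
import java.util.function.DoubleUnaryOperator;

public class DoublingTest {
    // How does g change when the input size doubles? For linear and
    // quadratic functions the ratio g(2n)/g(n) is constant (2 and 4);
    // for a logarithm it is the difference, not the ratio, that is
    // constant (log2(2n) - log2(n) = 1).
    public static double ratio(DoubleUnaryOperator g, double n) {
        return g.applyAsDouble(2 * n) / g.applyAsDouble(n);
    }

    public static double log2(double n) {
        return Math.log(n) / Math.log(2);
    }

    public static void main(String[] args) {
        System.out.println(ratio(n -> n, 1000));      // 2.0 (linear)
        System.out.println(ratio(n -> n * n, 1000));  // 4.0 (quadratic)
        System.out.println(log2(2000) - log2(1000));  // 1.0 (logarithmic)
    }
}
```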
23 Big-O Notation
Examples:
3n is O(n)
4n + 5 is O(n)
4n + 2n + 1 is O(n)
3n is O(n^2)
n * Max[T(b1), T(b2)] is O(n)
n * (T(b1) + n * T(b2)) is O(n^2)
The last two examples might represent the running times of two different algorithms. It is obvious that we would prefer our algorithms to be O(n) rather than O(n^2). This classification using O-notation allows us to make the appropriate decision and pick the first of the two.
24 Analysis Example
Our task is to decide on an algorithm to find the largest number in an array of integers. We'll use two different algorithms and analyse both to see which is computationally more efficient.
Algorithm 1:
public static int findLargest(int numbers[]) {
    int largest = 0;
    boolean foundLarger;
    foundLarger = true;                                    // b1
    // Go through the numbers one by one
    for (int i = 0; (i < numbers.length) && foundLarger; i++) {
        foundLarger = false;                               // b2
        // Look for a number larger than the current one
        for (int j = 0; j < numbers.length; j++) {
            // This one is larger, so numbers[i] can't be the largest
            if (numbers[j] > numbers[i])                   // b3
                foundLarger = true;
        }
        // If we didn't find a larger value then the current number must be the largest
        if (!foundLarger)                                  // b4
            largest = numbers[i];
    }
    return largest;                                        // b5
}
25 Algorithm analysis
In order to analyse the algorithm we must first identify the basic operations. They are marked on the previous slide; we'll label them b1..b5. Usually we tend to ignore the constant times b1 and b5 - they're just included here for completeness. Next we have to identify the input size. The input size is simply the number of elements in the array (we'll call this n). In general the input size is always obvious. We have to work out the timing for this algorithm and then use O-notation to decide how to categorise it. Using the rules we've seen earlier we get
Time = T(b1) + n * (T(b2) + n * Max(T(b3), 0) + Max(T(b4), 0)) + T(b5)
If we eliminate the redundant Max functions and multiply this out we get
Time = n^2 * T(b3) + n * (T(b2) + T(b4)) + T(b1) + T(b5)
26 Algorithm Classification
Using O-notation, we must try to classify our Time function (f(n)) in terms of one of the standard classification functions (g(n)). Recall the definition of O:
f(n) <= c*g(n), n >= n0
Our time function has an n^2 term in it, so it could not possibly be O(n), i.e. linear: we could never pick a constant c that would guarantee n^2 <= c*n. Our time function is O(n^2). Essentially the function is of the form a*n^2 + b*n + d, where a, b and d are just arbitrary constants. We can always choose a sufficiently large value c such that
a*n^2 + b*n + d <= c*n^2
In general, if the timing function for an algorithm has the form of a polynomial then we can simply take the highest power in the polynomial and this will give us our g(n).
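The definition can be exercised directly. Taking every T(b) as 1 unit, algorithm 1's time function becomes f(n) = n^2 + 2n + 2, and this sketch (names ours) scans a range of n to check f(n) <= c*n^2 for a candidate c and n0. A finite scan is evidence rather than a proof, but here c = 5, n0 = 1 does hold for all n (since 2n + 2 <= 4n^2 for n >= 1):

```java
public class BigOCheck {
    // f(n) = n^2 + 2n + 2: algorithm 1's basic-operation count with
    // every T(b) taken as 1 unit of time.
    public static long f(long n) {
        return n * n + 2 * n + 2;
    }

    // Check the Big-O definition f(n) <= c * n^2 for n0 <= n <= limit.
    // A finite scan is only evidence, not a proof.
    public static boolean witness(long c, long n0, long limit) {
        for (long n = n0; n <= limit; n++) {
            if (f(n) > c * n * n) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(witness(5, 1, 1_000_000)); // true: c = 5 works
        System.out.println(witness(1, 1, 1_000_000)); // false: c = 1 fails at n = 1
    }
}
```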
27 Algorithm Analysis
Now we will analyse a different algorithm which performs the same task.
public static int findLargest(int numbers[]) {
    int largest;
    largest = numbers[0];                     // b1
    // Go through the numbers one by one
    for (int i = 1; i < numbers.length; i++) {
        // Is the current number larger than the largest so far?
        if (numbers[i] > largest)             // b2
            largest = numbers[i];
    }
    return largest;                           // b3
}
A similar analysis to the previous example gives
Time = n * T(b2) + T(b1) + T(b3)
The highest power of n is 1; this implies the algorithm is O(n), or linear. O(n) is a large improvement on O(n^2). This algorithm is much more efficient.
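The difference between the two algorithms can be seen by counting comparisons directly. This sketch (instrumented versions of both findLargest algorithms; the counters and class name are ours) counts element comparisons - the basic operation - on an ascending array, which is a worst case for algorithm 1 because the largest element is examined last:

```java
public class CompareAlgorithms {
    // Instrumented version of algorithm 1: count element comparisons.
    public static int quadraticComparisons(int[] numbers) {
        int count = 0;
        boolean foundLarger = true;
        for (int i = 0; i < numbers.length && foundLarger; i++) {
            foundLarger = false;
            for (int j = 0; j < numbers.length; j++) {
                count++; // one comparison of two elements
                if (numbers[j] > numbers[i]) foundLarger = true;
            }
        }
        return count;
    }

    // Instrumented version of algorithm 2: count element comparisons.
    public static int linearComparisons(int[] numbers) {
        int count = 0;
        int largest = numbers[0];
        for (int i = 1; i < numbers.length; i++) {
            count++; // one comparison of two elements
            if (numbers[i] > largest) largest = numbers[i];
        }
        return count;
    }

    public static void main(String[] args) {
        // Ascending array of 100 elements: worst case for algorithm 1.
        int[] worst = new int[100];
        for (int i = 0; i < 100; i++) worst[i] = i;
        System.out.println(quadraticComparisons(worst)); // 10000 = n^2
        System.out.println(linearComparisons(worst));    // 99 = n - 1
    }
}
```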
28 Algorithm Classification
O-notation is nothing more than a mathematical notation that captures the idea of the slope of an asymptote.
[Graph: Time f(n) against Input Size - a program with a constant running time is a horizontal line; the actual position of the line is irrelevant.]
When we use O-notation we are implicitly performing the following steps:
1. Picture the running time of the algorithm f(n).
2. Calculate the asymptote to the graph of this function as n goes to infinity.
3. Ignore the position of the asymptote: move it back to the origin and compare its slope with the slopes of the standard graphs.
29 Algorithm Analysis
[Graphs: Time f(n) against Input Size, with its asymptote; then the asymptote re-plotted through the origin, lying between the "Infinite Time" line (the y axis) and the "Constant Time" line (horizontal).]
We've performed the three implicit steps here. Note that the asymptote to a line is just the line itself. Constant running times all map to the horizontal line in the diagram above; an infinite running time would map to the y axis. All other algorithms lie somewhere in between. For the moment we won't include the standard ones. Let's examine how our previous two algorithms would perform. Before we can actually graph the running time we must first give values to the time it takes to execute the basic operations. We'll just assume they all take 1 unit of time each. The actual value we pick is irrelevant - it won't affect the slope of the asymptote.
30 Algorithm Analysis
Algorithm 1: Time = n^2 * T(b3) + n * (T(b2) + T(b4)) + T(b1) + T(b5); this implies f(n) = n^2 + 2n + 2.
Following the three steps from before we get:
[Graph: the curve f(n) = n^2 + 2n + 2 and its asymptotic slope re-plotted through the origin, between the Infinite Time axis and the Constant Time line.]
Similarly for algorithm 2: Time = n * T(b2) + T(b1) + T(b3); this implies f(n) = n + 2.
[Graph: the line f(n) = n + 2 and its asymptote re-plotted through the origin.]
31 Algorithm Analysis
Comparing these directly:
[Graph: the two asymptotic slopes plotted together between the Infinite Time axis and the Constant Time line - Algorithm 1's line is steeper than Algorithm 2's.]
It is clear that algorithm 1 is less efficient than algorithm 2. The lines for algorithm 1 and algorithm 2 represent the standard lines for O(n^2) and O(n) respectively.
32 OM-Notation
We use OM-notation (Big-Omega) to describe an algorithm which takes at least some number of basic operations. It represents a lower bound on the worst-case running time of an algorithm.
OM-notation - formal definition:
Let f: N -> R and g: N -> R. If there exist values c and n0 such that f(n) >= c*g(n) when n >= n0, then f(n) is OM(g(n)).
Less formally: given a problem with input size n (n must be at least as big as our smallest allowable input size n0), our timing function f(n) is always at least as large as a constant times some other timing function g(n). OM-notation is often more difficult to work out than O-notation.
Examples:
10n is OM(n)
10n + 2n^2 is OM(n^2)
33 THETA-notation
Again let f: N -> R and g: N -> R. If f(n) is O(g(n)) and f(n) is OM(g(n)) then we can say f(n) is THETA(g(n)); f(n) is said to be asymptotically equivalent to g(n). Really what this says is that f(n) and g(n) behave similarly as n grows larger and larger. If we find a specific algorithm is THETA(g(n)) then this is an indication that we cannot improve on the design of this algorithm - it's an indication that it is time to hand the algorithm over to some data monkey to let them improve the implementation. This does not mean that we couldn't replace the algorithm with a better one; it just means that this specific algorithm is as well designed as it could be.
34 Notation Rules
f(n) is O(g(n)) if and only if g(n) is OM(f(n)).
If f(n) is O(g(n)) then:
f(n) + g(n) is O(g(n))
f(n) + g(n) is OM(g(n))
f(n) * g(n) is O(g(n)^2)
f(n) * g(n) is OM(f(n)^2)
Examples: let f(n) = 2n, g(n) = 4n^2; then f(n) is O(g(n)) and
2n + 4n^2 is O(n^2)
2n + 4n^2 is OM(n^2)
2n * 4n^2 is O(n^4)
2n * 4n^2 is OM(n^2)
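These rules can be spot-checked numerically. This sketch (names ours) scans a range of n to confirm two of the bounds for f(n) = 2n and g(n) = 4n^2: the sum rule f(n) + g(n) <= c*g(n) with c = 2, and the product's lower bound f(n)*g(n) >= c*f(n)^2 with c = 1. As before, a finite scan is evidence, not a proof:

```java
public class NotationRules {
    // Spot-check two notation rules for f(n) = 2n, g(n) = 4n^2 over
    // 1 <= n <= limit. A finite scan is evidence, not a proof.
    public static boolean sumRuleHolds(long limit) {
        for (long n = 1; n <= limit; n++) {
            long f = 2 * n;
            long g = 4 * n * n;
            // f + g is O(g): here f + g <= 2 * g for all n >= 1.
            if (f + g > 2 * g) return false;
        }
        return true;
    }

    public static boolean productLowerBoundHolds(long limit) {
        for (long n = 1; n <= limit; n++) {
            long f = 2 * n;
            long g = 4 * n * n;
            // f * g is OM(f^2): here f * g >= 1 * f * f for all n >= 1.
            if (f * g < f * f) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(sumRuleHolds(100_000));            // true
        System.out.println(productLowerBoundHolds(100_000));  // true
    }
}
```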
More informationUnit 1 Number Sense. In this unit, students will study repeating decimals, percents, fractions, decimals, and proportions.
Unit 1 Number Sense In this unit, students will study repeating decimals, percents, fractions, decimals, and proportions. BLM Three Types of Percent Problems (p L-34) is a summary BLM for the material
More informationLinear and quadratic Taylor polynomials for functions of several variables.
ams/econ 11b supplementary notes ucsc Linear quadratic Taylor polynomials for functions of several variables. c 010, Yonatan Katznelson Finding the extreme (minimum or maximum) values of a function, is
More informationChapter 11 Number Theory
Chapter 11 Number Theory Number theory is one of the oldest branches of mathematics. For many years people who studied number theory delighted in its pure nature because there were few practical applications
More informationCS/COE 1501 http://cs.pitt.edu/~bill/1501/
CS/COE 1501 http://cs.pitt.edu/~bill/1501/ Lecture 01 Course Introduction Meta-notes These notes are intended for use by students in CS1501 at the University of Pittsburgh. They are provided free of charge
More informationNo Solution Equations Let s look at the following equation: 2 +3=2 +7
5.4 Solving Equations with Infinite or No Solutions So far we have looked at equations where there is exactly one solution. It is possible to have more than solution in other types of equations that are
More informationRandom Fibonacci-type Sequences in Online Gambling
Random Fibonacci-type Sequences in Online Gambling Adam Biello, CJ Cacciatore, Logan Thomas Department of Mathematics CSUMS Advisor: Alfa Heryudono Department of Mathematics University of Massachusetts
More informationFaster deterministic integer factorisation
David Harvey (joint work with Edgar Costa, NYU) University of New South Wales 25th October 2011 The obvious mathematical breakthrough would be the development of an easy way to factor large prime numbers
More informationPOLYNOMIAL FUNCTIONS
POLYNOMIAL FUNCTIONS Polynomial Division.. 314 The Rational Zero Test.....317 Descarte s Rule of Signs... 319 The Remainder Theorem.....31 Finding all Zeros of a Polynomial Function.......33 Writing a
More information8.2. Solution by Inverse Matrix Method. Introduction. Prerequisites. Learning Outcomes
Solution by Inverse Matrix Method 8.2 Introduction The power of matrix algebra is seen in the representation of a system of simultaneous linear equations as a matrix equation. Matrix algebra allows us
More informationOverview. Essential Questions. Precalculus, Quarter 4, Unit 4.5 Build Arithmetic and Geometric Sequences and Series
Sequences and Series Overview Number of instruction days: 4 6 (1 day = 53 minutes) Content to Be Learned Write arithmetic and geometric sequences both recursively and with an explicit formula, use them
More informationSession 6 Number Theory
Key Terms in This Session Session 6 Number Theory Previously Introduced counting numbers factor factor tree prime number New in This Session composite number greatest common factor least common multiple
More information0.8 Rational Expressions and Equations
96 Prerequisites 0.8 Rational Expressions and Equations We now turn our attention to rational expressions - that is, algebraic fractions - and equations which contain them. The reader is encouraged to
More informationHash Tables. Computer Science E-119 Harvard Extension School Fall 2012 David G. Sullivan, Ph.D. Data Dictionary Revisited
Hash Tables Computer Science E-119 Harvard Extension School Fall 2012 David G. Sullivan, Ph.D. Data Dictionary Revisited We ve considered several data structures that allow us to store and search for data
More informationComputer Science 210: Data Structures. Searching
Computer Science 210: Data Structures Searching Searching Given a sequence of elements, and a target element, find whether the target occurs in the sequence Variations: find first occurence; find all occurences
More informationSequential Data Structures
Sequential Data Structures In this lecture we introduce the basic data structures for storing sequences of objects. These data structures are based on arrays and linked lists, which you met in first year
More informationComputer Algorithms. NP-Complete Problems. CISC 4080 Yanjun Li
Computer Algorithms NP-Complete Problems NP-completeness The quest for efficient algorithms is about finding clever ways to bypass the process of exhaustive search, using clues from the input in order
More information6.4 Normal Distribution
Contents 6.4 Normal Distribution....................... 381 6.4.1 Characteristics of the Normal Distribution....... 381 6.4.2 The Standardized Normal Distribution......... 385 6.4.3 Meaning of Areas under
More informationCHAPTER 18 Programming Your App to Make Decisions: Conditional Blocks
CHAPTER 18 Programming Your App to Make Decisions: Conditional Blocks Figure 18-1. Computers, even small ones like the phone in your pocket, are good at performing millions of operations in a single second.
More informationcsci 210: Data Structures Recursion
csci 210: Data Structures Recursion Summary Topics recursion overview simple examples Sierpinski gasket Hanoi towers Blob check READING: GT textbook chapter 3.5 Recursion In general, a method of defining
More informationCS104: Data Structures and Object-Oriented Design (Fall 2013) October 24, 2013: Priority Queues Scribes: CS 104 Teaching Team
CS104: Data Structures and Object-Oriented Design (Fall 2013) October 24, 2013: Priority Queues Scribes: CS 104 Teaching Team Lecture Summary In this lecture, we learned about the ADT Priority Queue. A
More informationAlgorithms. Margaret M. Fleck. 18 October 2010
Algorithms Margaret M. Fleck 18 October 2010 These notes cover how to analyze the running time of algorithms (sections 3.1, 3.3, 4.4, and 7.1 of Rosen). 1 Introduction The main reason for studying big-o
More informationSoftware Testing. Definition: Testing is a process of executing a program with data, with the sole intention of finding errors in the program.
Software Testing Definition: Testing is a process of executing a program with data, with the sole intention of finding errors in the program. Testing can only reveal the presence of errors and not the
More informationFormal Languages and Automata Theory - Regular Expressions and Finite Automata -
Formal Languages and Automata Theory - Regular Expressions and Finite Automata - Samarjit Chakraborty Computer Engineering and Networks Laboratory Swiss Federal Institute of Technology (ETH) Zürich March
More informationLinear Programming Notes VII Sensitivity Analysis
Linear Programming Notes VII Sensitivity Analysis 1 Introduction When you use a mathematical model to describe reality you must make approximations. The world is more complicated than the kinds of optimization
More informationTaylor Polynomials. for each dollar that you invest, giving you an 11% profit.
Taylor Polynomials Question A broker offers you bonds at 90% of their face value. When you cash them in later at their full face value, what percentage profit will you make? Answer The answer is not 0%.
More informationChapter 3. Distribution Problems. 3.1 The idea of a distribution. 3.1.1 The twenty-fold way
Chapter 3 Distribution Problems 3.1 The idea of a distribution Many of the problems we solved in Chapter 1 may be thought of as problems of distributing objects (such as pieces of fruit or ping-pong balls)
More informationNotes on Factoring. MA 206 Kurt Bryan
The General Approach Notes on Factoring MA 26 Kurt Bryan Suppose I hand you n, a 2 digit integer and tell you that n is composite, with smallest prime factor around 5 digits. Finding a nontrivial factor
More informationIntroduction to Matrices
Introduction to Matrices Tom Davis tomrdavis@earthlinknet 1 Definitions A matrix (plural: matrices) is simply a rectangular array of things For now, we ll assume the things are numbers, but as you go on
More informationExamples of Functions
Examples of Functions In this document is provided examples of a variety of functions. The purpose is to convince the beginning student that functions are something quite different than polynomial equations.
More informationCS 141: Introduction to (Java) Programming: Exam 1 Jenny Orr Willamette University Fall 2013
Oct 4, 2013, p 1 Name: CS 141: Introduction to (Java) Programming: Exam 1 Jenny Orr Willamette University Fall 2013 1. (max 18) 4. (max 16) 2. (max 12) 5. (max 12) 3. (max 24) 6. (max 18) Total: (max 100)
More informationKenken For Teachers. Tom Davis tomrdavis@earthlink.net http://www.geometer.org/mathcircles June 27, 2010. Abstract
Kenken For Teachers Tom Davis tomrdavis@earthlink.net http://www.geometer.org/mathcircles June 7, 00 Abstract Kenken is a puzzle whose solution requires a combination of logic and simple arithmetic skills.
More informationWRITING PROOFS. Christopher Heil Georgia Institute of Technology
WRITING PROOFS Christopher Heil Georgia Institute of Technology A theorem is just a statement of fact A proof of the theorem is a logical explanation of why the theorem is true Many theorems have this
More informationPolynomial Invariants
Polynomial Invariants Dylan Wilson October 9, 2014 (1) Today we will be interested in the following Question 1.1. What are all the possible polynomials in two variables f(x, y) such that f(x, y) = f(y,
More informationInformation Theory and Coding Prof. S. N. Merchant Department of Electrical Engineering Indian Institute of Technology, Bombay
Information Theory and Coding Prof. S. N. Merchant Department of Electrical Engineering Indian Institute of Technology, Bombay Lecture - 17 Shannon-Fano-Elias Coding and Introduction to Arithmetic Coding
More informationDecision Making under Uncertainty
6.825 Techniques in Artificial Intelligence Decision Making under Uncertainty How to make one decision in the face of uncertainty Lecture 19 1 In the next two lectures, we ll look at the question of how
More informationLS.6 Solution Matrices
LS.6 Solution Matrices In the literature, solutions to linear systems often are expressed using square matrices rather than vectors. You need to get used to the terminology. As before, we state the definitions
More informationElements of a graph. Click on the links below to jump directly to the relevant section
Click on the links below to jump directly to the relevant section Elements of a graph Linear equations and their graphs What is slope? Slope and y-intercept in the equation of a line Comparing lines on
More informationCost Model: Work, Span and Parallelism. 1 The RAM model for sequential computation:
CSE341T 08/31/2015 Lecture 3 Cost Model: Work, Span and Parallelism In this lecture, we will look at how one analyze a parallel program written using Cilk Plus. When we analyze the cost of an algorithm
More informationis identically equal to x 2 +3x +2
Partial fractions 3.6 Introduction It is often helpful to break down a complicated algebraic fraction into a sum of simpler fractions. 4x+7 For example it can be shown that has the same value as 1 + 3
More informationPartial Fractions. Combining fractions over a common denominator is a familiar operation from algebra:
Partial Fractions Combining fractions over a common denominator is a familiar operation from algebra: From the standpoint of integration, the left side of Equation 1 would be much easier to work with than
More informationMultiplication and Division with Rational Numbers
Multiplication and Division with Rational Numbers Kitty Hawk, North Carolina, is famous for being the place where the first airplane flight took place. The brothers who flew these first flights grew up
More informationDynamic programming formulation
1.24 Lecture 14 Dynamic programming: Job scheduling Dynamic programming formulation To formulate a problem as a dynamic program: Sort by a criterion that will allow infeasible combinations to be eli minated
More informationS-Parameters and Related Quantities Sam Wetterlin 10/20/09
S-Parameters and Related Quantities Sam Wetterlin 10/20/09 Basic Concept of S-Parameters S-Parameters are a type of network parameter, based on the concept of scattering. The more familiar network parameters
More informationFunctions Recursion. C++ functions. Declare/prototype. Define. Call. int myfunction (int ); int myfunction (int x){ int y = x*x; return y; }
Functions Recursion C++ functions Declare/prototype int myfunction (int ); Define int myfunction (int x){ int y = x*x; return y; Call int a; a = myfunction (7); function call flow types type of function
More informationSection 4.1 Rules of Exponents
Section 4.1 Rules of Exponents THE MEANING OF THE EXPONENT The exponent is an abbreviation for repeated multiplication. The repeated number is called a factor. x n means n factors of x. The exponent tells
More information1 The Brownian bridge construction
The Brownian bridge construction The Brownian bridge construction is a way to build a Brownian motion path by successively adding finer scale detail. This construction leads to a relatively easy proof
More informationMBA Jump Start Program
MBA Jump Start Program Module 2: Mathematics Thomas Gilbert Mathematics Module Online Appendix: Basic Mathematical Concepts 2 1 The Number Spectrum Generally we depict numbers increasing from left to right
More informationAP Computer Science Java Mr. Clausen Program 9A, 9B
AP Computer Science Java Mr. Clausen Program 9A, 9B PROGRAM 9A I m_sort_of_searching (20 points now, 60 points when all parts are finished) The purpose of this project is to set up a program that will
More informationCPLEX Tutorial Handout
CPLEX Tutorial Handout What Is ILOG CPLEX? ILOG CPLEX is a tool for solving linear optimization problems, commonly referred to as Linear Programming (LP) problems, of the form: Maximize (or Minimize) c
More informationZeros of a Polynomial Function
Zeros of a Polynomial Function An important consequence of the Factor Theorem is that finding the zeros of a polynomial is really the same thing as factoring it into linear factors. In this section we
More informationBinary Number System. 16. Binary Numbers. Base 10 digits: 0 1 2 3 4 5 6 7 8 9. Base 2 digits: 0 1
Binary Number System 1 Base 10 digits: 0 1 2 3 4 5 6 7 8 9 Base 2 digits: 0 1 Recall that in base 10, the digits of a number are just coefficients of powers of the base (10): 417 = 4 * 10 2 + 1 * 10 1
More information