Problem Solving and Blind Search


Plan: Problem Solving; Blind Search (Breadth-First, Priority-First, Depth-First, Depth-Limited, Iterative Deepening); Constraint Satisfaction

We talked last time about various kinds of agents. Clearly, simple reflex agents, with and without state, are the sorts of things that you probably already know (or can figure out) how to implement. We will therefore begin our study of AI techniques by focusing on how to construct goal-based agents that perform problem solving.

Problem-Solving Agents

Consider a general paradigm, called problem solving, for implementing goal-based agent programs. Figuring out what to do to achieve a goal would be easy if all effects were the result of executing a single action. But often a sequence of actions is required to achieve a goal. As an example, consider getting from A(rad) to B(ucharest) via a sequence of local city-to-city transitions (or Ann Arbor to Bowling Green?). It is impossible to get there in one transition, but we can find a sequence of transitions that works. Show the 15-puzzle as another example. We'll mostly use the smaller (3x3) version, called the 8-puzzle, for illustration. Or consider the ToH (Towers of Hanoi) problem. We can't do it in just one step (or, at least, not in just one legal step). How many steps will it take to solve this? How do we know? How would an agent know?

In the route-planning example, it is obvious that the effect of an action is to move the agent from one place to another. It turns out that this is analogous to actions in general, which can be modeled as moving the agent from one situation, or state, to another.

The state may include the physical location of the agent, but may also specify any other feature of the world that is relevant to the agent. How would we specify a state in the 8-puzzle? What do we need to represent? What data structure(s) might we use to represent it? In the ToH example, what is relevant in a state? What is not? What data structure might we use?

Even ignoring everything but the most relevant stuff - that is, by abstracting the problem - we still might have quite a few states. How many states are there in the ToH problem? Do we have to worry about the order in which disks are on pegs? Why not? What are the available actions to an agent in this problem? How many states are there in the 8-puzzle?

We can also define more precisely what it means to solve a problem. A problem consists of: an initial state; a set of available actions, or operators, defined in terms of a successor function S: state x operator -> state, specifying the state resulting from executing an action in any particular state; the goal test, defining the subset of states that satisfy the goal; and a path cost function g specifying the cost of executing a sequence of operators, or a path. A solution to a problem is a sequence of operators <o_1, ..., o_n> such that the state S(S(...S(S(x_init, o_1), o_2)...), o_n) satisfies the goal test. The solution is optimal if it has minimum path cost among all solutions.

Given the data structure that we decided on for representing a state in the 8-puzzle, for example, we can now be more precise about the problem description. The initial state is something that would be given to us, and that we would encode in the array. <Give an example.>
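For concreteness, here is one way these ingredients - initial state, successor function, goal test, path cost - might be written down for the 8-puzzle in Python. This is only a sketch: it uses a flat 9-tuple (with 0 for the blank) rather than the 2-D array discussed above, and the class and operator names are illustrative.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)            # 0 marks the blank ("space")

class EightPuzzle:
    def __init__(self, initial):
        self.initial = tuple(initial)         # the initial state is given to us

    def goal_test(self, state):
        # Compare the candidate state with the goal configuration.
        return state == GOAL

    def successors(self, state):
        # Yield (operator, resulting state) pairs: move the blank up/down/left/right.
        i = state.index(0)
        row, col = divmod(i, 3)
        moves = {'space-up': -3, 'space-down': +3, 'space-left': -1, 'space-right': +1}
        for op, delta in moves.items():
            if op == 'space-up' and row == 0:      continue
            if op == 'space-down' and row == 2:    continue
            if op == 'space-left' and col == 0:    continue
            if op == 'space-right' and col == 2:   continue
            j = i + delta
            new = list(state)
            new[i], new[j] = new[j], new[i]   # swap the blank with the adjacent tile
            yield op, tuple(new)

    def path_cost(self, operators):
        return len(operators)                 # unit cost per move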

The goal test, in this case, is very specific about what state needs to be reached to satisfy the goal. So the goal test would involve comparing a potential goal state array with an array that contains the goal state; if the arrays are identical then the goal has been reached. The operators involve sliding tiles into the open space, or equivalently, moving the space to one of its adjacent locations. Thus, we could have an operator for moving the space right, in pseudocode:

Operator: space-right(state-array)
    Xspace, Yspace :- Find-space(state-array)
    If (Xspace >= 3) then return(state-array)    (space already in the rightmost column; no legal move)
    Else {new-state-array :- copy(state-array);
          new-state-array(Xspace, Yspace) :- new-state-array(Xspace+1, Yspace);
          new-state-array(Xspace+1, Yspace) :- space;
          return(new-state-array)}

The other operators would be defined similarly.

Let's look at another problem - the milk jug problem. Let's try doing some formulation. <Show them the problem.> How do we formulate the goal? How will we specify a state? Is the space continuous or discrete? How large will the state space be? (20) How many of these states will satisfy the goal? (4) What is a solution? How long a sequence would be too long to expect as a solution?

ToH: What are the goal state(s)? What would you expect as the path to the goal, in terms of length? <For the 8-disk case, there are 3^8 states. The path length is 2^d - 1, so 2^8 - 1 = 255. Thus, the proportion of states visited is less than 4% (255/6561). It would be good to avoid exhaustive search, don't you think?>
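As a second formulation exercise, here is a sketch of how the jug problem might be encoded, assuming the classic variant with a 4-unit and a 3-unit jug and the goal of getting exactly 2 units into the larger jug. These capacities are chosen to match the 20-state count above; the actual capacities and goal used in lecture may differ.

CAP_A, CAP_B = 4, 3                  # assumed jug capacities

def successors(state):
    a, b = state
    # Operators: fill a jug, empty a jug, or pour one jug into the other.
    yield 'fill-A', (CAP_A, b)
    yield 'fill-B', (a, CAP_B)
    yield 'empty-A', (0, b)
    yield 'empty-B', (a, 0)
    pour = min(a, CAP_B - b)         # pour A into B until A is empty or B is full
    yield 'pour-A-into-B', (a - pour, b + pour)
    pour = min(b, CAP_A - a)         # pour B into A
    yield 'pour-B-into-A', (a + pour, b - pour)

def goal_test(state):
    return state[0] == 2             # assumed goal: exactly 2 units in the larger jug

initial = (0, 0)                     # both jugs start empty (assumed)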

Search in State Spaces

In general, how can we solve a problem? Some problems may be amenable to particular procedures that result in solutions. But a general approach that can be applied to any problem described in this manner is called search. By search we mean any systematic way of traversing the graph of states in order to find a sequence of operators leading to a goal state.

Starting from an initial state, the set of all paths beginning at that state forms a tree, which we will call the search tree. A search strategy is a method of generating that tree, dictating the order in which the paths are elaborated. We elaborate paths by expanding a node, generating all of its possible successors. (Note: I cannot emphasize strongly enough this view of generating a tree during the search. Lots of people have in mind the notion of searching a tree, as if the tree is all known ahead of time. Would you want to generate the whole tree for the ToH, and then search it? I think not!)

To implement a search strategy, we will need to keep track, at each node of the search tree, of: the corresponding state in the state space, the node's predecessor in the path (its parent), and the operator applied to the parent to generate this node. It will also often be convenient to keep track of the depth of the node and the path cost from the root to the node (see the sketch after the criteria below).

Comparing Search Strategies

We can evaluate particular search strategies on the following criteria:
- completeness: whether it is guaranteed to find a solution if one exists
- time efficiency (complexity)
- space efficiency
- optimality, or more generally, solution quality
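A minimal sketch of this per-node bookkeeping in Python; the field and function names are illustrative, not prescribed by the lecture.

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                       # the corresponding state in the state space
    parent: Optional["Node"] = None  # predecessor on the path (None for the root)
    operator: Any = None             # operator applied to the parent to generate this node
    depth: int = 0                   # depth of the node in the search tree
    path_cost: float = 0.0           # g: cost of the path from the root to this node

def solution_path(node):
    # Walk the parent links back to the root to recover the operator sequence.
    ops = []
    while node.parent is not None:
        ops.append(node.operator)
        node = node.parent
    return list(reversed(ops))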

Breadth-first Search

In BFS, our search strategy will be to expand nodes in the order they are generated. To implement this, as we generate new successor nodes, we simply enqueue the nodes at the end of the list of nodes marking the frontier of the search tree, so they will be removed from the queue in a first-in, first-out (FIFO) manner. This corresponds to a level-order traversal of the search tree: all nodes at depth d are expanded before proceeding to depth d+1. Illustrate with the 8-puzzle.

Let us examine BFS with respect to our evaluation criteria. First, it is complete. If there is a solution, BFS will eventually find it. Moreover, it will find the shallowest solution there is. So if path cost corresponds to solution depth, BFS also produces optimal solutions.

Now let us consider efficiency, first time complexity. To simplify the analysis we will assume a uniform search space, one where each node has the same number of successors. This is satisfied in some problems, but is not quite true for some of our examples, such as ToH or the jug problem. The number of successors for each node is called the branching factor, denoted b. Suppose the shallowest solution is found at depth d. Since BFS will have generated the complete search trees at depths < d, the number of nodes in the search tree is:

    sum_{i=0}^{d-1} b^i + x = 1 + b + b^2 + ... + b^(d-1) + x, where 1 <= x <= b^d.

The actual value of x depends on where at depth d the solution is found (draw triangle picture). Regardless, both the worst-case and average-case complexity is O(b^d). (Explain this, as well as the expression above.) This is exponential, and so obviously can get out of hand quickly with d. This may be depressing, but get used to it: exponential complexity is the usual situation for search algorithms. <Think about this with the ToH problem. Let's say conservatively that b is 2. We already know that d is 255 for the 8-disk version. Care to do the mathematics? (5.7e76, or 1.7e38 + x.)> You betcha we don't want to repeat states!
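To make the FIFO discipline concrete, here is a minimal BFS sketch in Python, assuming the Node record and problem interface (initial, successors, goal_test) sketched earlier. It assumes unit step costs and omits repeated-state checking, which we return to below.

from collections import deque

def breadth_first_search(problem):
    root = Node(state=problem.initial)
    if problem.goal_test(root.state):
        return root
    frontier = deque([root])                  # FIFO queue of fringe nodes
    while frontier:
        node = frontier.popleft()             # expand nodes in the order generated
        for op, succ in problem.successors(node.state):
            child = Node(state=succ, parent=node, operator=op,
                         depth=node.depth + 1,
                         path_cost=node.path_cost + 1)   # unit step cost assumed
            if problem.goal_test(child.state):
                return child                  # shallowest solution found first
            frontier.append(child)            # enqueue at the end of the frontier
    return None                               # no solution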

Perhaps a more serious concern is the space complexity. This is the same as the time complexity, because we have to store the entire fringe, which consists of all the nodes at the deepest level. This is also O(b^d). In typical computer configurations, we will run out of space long before we run out of time.

A slight variant, or generalization, of BFS is uniform-cost search. In uniform-cost search, we expand the node with the minimum path cost, rather than the one with the minimum depth. This requires a priority queue as opposed to a FIFO queue (remember priority-first search?). Uniform-cost search has the same completeness and complexity properties (modulo the queuing operations) as BFS, but is optimal in somewhat more general circumstances. Namely, it produces the optimal solution as long as path cost never decreases as we go down a path. The PFS shortest-path algorithm (a.k.a. Dijkstra's algorithm) is an instance of uniform-cost search. Different search strategies are defined as versions of PFS with different sorting functions.

Depth-First Search

The second canonical search method is depth-first search (DFS). In contrast with BFS, DFS expands the most recently generated node first. Equivalently, it expands the deepest node on the fringe. To implement DFS, we simply replace the FIFO queuing discipline with LIFO. That is, the queue is a stack. (Sometimes this is implemented recursively using the function calling stack, rather than an explicit data structure for fringe nodes.) A sketch follows below.
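Here is the corresponding DFS sketch. Compared to the BFS sketch, the only essential change is the queuing discipline: the frontier is a stack, popped from the same end it is pushed to. The optional max_depth argument anticipates the depth-limited variant discussed below; it is an addition of this sketch, not part of plain DFS.

def depth_first_search(problem, max_depth=None):
    frontier = [Node(state=problem.initial)]  # a plain Python list used as a LIFO stack
    while frontier:
        node = frontier.pop()                 # expand the deepest (most recently generated) node
        if problem.goal_test(node.state):
            return node
        if max_depth is not None and node.depth >= max_depth:
            continue                          # depth-limited variant: do not expand below the limit
        for op, succ in problem.successors(node.state):
            frontier.append(Node(state=succ, parent=node, operator=op,
                                 depth=node.depth + 1,
                                 path_cost=node.path_cost + 1))
    return None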

Is DFS complete? No - charging down a path in depth-first manner might lead to infinite expansions in that direction, when a solution was possibly close at hand along another direction. One way of reducing the chances of infinite movement along one path is to recognize states that you've already expanded and not expand them again. Talk about avoiding redundancy:
- Avoid two-step cycles (don't reverse the previous step).
- Avoid n-step cycles (don't have the same state twice in a path) - see the sketch below, after depth-limited search.
- Avoid redoing anything: keep a closed list. Its cost?

Even with redundancy checking, we can still wander down an infinite path if the state space itself is infinite. Example: the initial state is 0, the operators are increment-by-1 and decrement-by-1, and the goal test is "< -3". What happens if the first operator applied is always increment-by-1?

In cases where we can bound the maximum depth that we need to search, say at m, however, DFS is complete. That is, if we can be sure that at least one solution lies at depth m or less. Its worst-case time complexity is O(b^m), as is the average case (but a little less obviously). This is basically the same as BFS. Moreover, DFS is not optimal. Consider a route-finding example where we make an unlucky early choice.

So why would we ever consider DFS? The answer lies in its space complexity. In DFS, the fringe requires at most b nodes per level, since all we need is the current node we have expanded and the other nodes to try if this fails (draw picture). Therefore the overall space complexity is O(bm), which is practically never a problem (unless the paths we are exploring are too long anyway - the space required is proportional to solution size).

Variant: depth-limited search. Simply impose a limit l on the depth of nodes to be expanded. Time complexity O(b^l), space complexity O(bl), complete iff l >= m (where, again, m is a depth at which a solution exists).
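Returning to the redundancy checks listed above, here is a sketch of the cheap path-based check, again assuming the Node record from earlier. Checking the current path rules out two-step and n-step cycles with no memory beyond the path itself; a closed set rules out all repeats but must store every state ever expanded.

def on_current_path(node, state):
    # True if `state` already appears on the path from the root to `node`.
    while node is not None:
        if node.state == state:
            return True
        node = node.parent
    return False

# In the DFS sketch above, the successor push would then become:
#
#     for op, succ in problem.successors(node.state):
#         if on_current_path(node, succ):     # or: if succ in closed_set
#             continue                        # prune the repeated state
#         frontier.append(Node(state=succ, parent=node, operator=op,
#                              depth=node.depth + 1,
#                              path_cost=node.path_cost + 1))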

Iterative Deepening

What we would really like to do is find a method that combines the advantages of BFS and DFS: getting the completeness and optimality of the former and the space efficiency of the latter. There is a technique that does this, called iterative deepening.

The idea of iterative deepening is very simple. We start by running DFS with a depth limit of 1. If that finds a solution (fat chance), we are done. Otherwise, we run DFS again with a depth limit of 2. And if that fails we start again with a depth limit of 3, and so on. Here's the pseudocode:

function ITERATIVE-DEEPENING-SEARCH(Problem)
    loop for depth from 0 to infinity
        if DEPTH-LIMITED-SEARCH(Problem, depth) succeeds then return its result
    end loop
    return failure

This is complete, because if there is a solution at depth d, the algorithm will eventually find it when it runs DEPTH-LIMITED-SEARCH with argument d (and all of the previous iterations are guaranteed to terminate). It is optimal in the same sense as BFS, because it never looks at a path of length d+1 before it looks at all paths of length d. In fact, it generates solutions in the same order as BFS.

Now what about complexity? At first glance, iterative deepening looks extremely inefficient, because there is a great deal of redundant processing. When running DEPTH-LIMITED-SEARCH with argument l, it repeats all the work it did at iteration l-1, which in turn repeats all the work it did at l-2, and so on. But on further examination, we see that this redundancy is not really significant. To see this, we can express the complexity of the algorithm as the sum of the complexity of its iterations:

    sum_{l=1}^{d} O(b^l) = O(b) + O(b^2) + ... + O(b^(d-1)) + O(b^d) = O(b^d).

So the asymptotic complexity is still O(b^d). In fact, the ratio of time required by DFID compared to DFS is given by (after some algebra) (b+1)/(b-1), which is maximized at 3 for b = 2. For higher branching factors (e.g., b = 10), the redundancy overhead is virtually negligible.
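A minimal Python rendering of this pseudocode, assuming the depth-limited DFS sketched earlier (depth_first_search with max_depth); the optional overall cutoff is an addition for safety and is not in the pseudocode above.

from itertools import count

def iterative_deepening_search(problem, overall_limit=None):
    for depth in count(0):                    # depth = 0, 1, 2, ...
        if overall_limit is not None and depth > overall_limit:
            return None                       # give up (not in the pseudocode above)
        result = depth_first_search(problem, max_depth=depth)
        if result is not None:
            return result                     # found at the shallowest possible depth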

So the time requirements are the same as for our basic methods, but the space requirements are just those of DFS. This is the best we can hope to do: in the worst case we may have to look at the entire search space, O(b^d), and the size of the solution itself is O(d). Iterative deepening is invariably the way to go when there is a large search space and we don't know the depth of the solution. An unusually definitive recommendation!

Bidirectional Search

How do you solve a maze? The hope is to work from the ends toward the middle. Why is this good? If we only go halfway from each end, then instead of O(b^d) we have two searches of O(b^(d/2)), which is a lot less! What would the space complexity be? O(b^(d/2)) also! But for this to work we need to:
- Know the inverse operators, to work backwards.
- Start backward from a goal state rather than a goal test.
- Discover whether a new node matches a node found in the other search tree - matching can be costly!
- Do a decent search in each half.
A sketch of a two-frontier search appears below.
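Here is a sketch of a two-frontier (bidirectional) breadth-first search. It assumes the operators are their own inverses - true for maze moves and the 8-puzzle - and that a single concrete goal state is given rather than just a goal test; it returns the sequence of states rather than operators. The function and variable names are illustrative.

from collections import deque

def bidirectional_search(successors, start, goal):
    # successors(state) yields (operator, next_state) pairs, as in the sketches above.
    if start == goal:
        return [start]
    parents_fwd, parents_bwd = {start: None}, {goal: None}
    frontier_fwd, frontier_bwd = deque([start]), deque([goal])

    def expand(frontier, parents, other_parents):
        # Expand one node; return the meeting state if the two frontiers touch.
        state = frontier.popleft()
        for _, succ in successors(state):
            if succ not in parents:
                parents[succ] = state
                if succ in other_parents:
                    return succ
                frontier.append(succ)
        return None

    while frontier_fwd and frontier_bwd:
        meet = expand(frontier_fwd, parents_fwd, parents_bwd)
        if meet is None:
            meet = expand(frontier_bwd, parents_bwd, parents_fwd)
        if meet is not None:
            # Stitch the two half-paths together at the meeting state.
            path = []
            s = meet
            while s is not None:
                path.append(s)
                s = parents_fwd[s]
            path.reverse()                    # start ... meet
            s = parents_bwd[meet]
            while s is not None:
                path.append(s)
                s = parents_bwd[s]
            return path                       # start ... meet ... goal
    return None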

Constraint Satisfaction

We have so far talked about methods applicable to all problems. There is a particular subclass of problems that is still very broad, yet has some additional structural properties that are useful to exploit in search. Members of this class are called constraint-satisfaction problems, or CSPs. CSPs represent a practically important special class, and indeed are the subject of much active research in AI and OR.

Recall that a problem is defined by sets of states, operators, a goal test, and a path cost function. In a CSP, these are:
- states: assignments to a set of variables
- operators: assigning a value to a variable
- goal test: constraints on joint variable assignments
- path cost: usually ignored, but may specify preferred solutions

Example: cryptarithmetic (a toy problem). SEND+MORE=MONEY. Variables are letters, which can take on values in the set {0, ..., 9} (the domain). Constraints are that the columns have to add up right, including carries (and perhaps that all characters stand for a different digit). The initial state is the empty assignment. Goal states are those that assign a value to every variable and where all constraints are satisfied.

We can of course apply our general search procedure to this problem, and to all CSPs. A very naive approach would be to start from the initial state (no assignments), and consider all assignments that we could make. For cryptarithmetic, this would correspond to a branching factor of 10x, where x is the number of characters in the problem. This is obviously wasteful, since it will generate separate paths that differ only in the order in which variables are assigned. While operator order matters for problems in general, it does not for CSPs. So almost all CSP algorithms generate successors for only a single variable assignment at a given search-tree node. This leads to a search space of size b^d, where d is the number of variables and b is the maximum domain size. For cryptarithmetic, this is 10^x. Still pretty bad, of course, but better than (10x)^x. We cannot hope to do significantly better in the worst case, because many NP-complete problems are CSPs.

There is another major fact we can take advantage of in CSPs. Namely, in solving a CSP we can often tell that a state can never lead to the goal, even before all variables are assigned. This is because the constraints may already be violated. It would be wasteful to expand such a state, so instead we prune the node, and avoid even looking at its entire subtree. This can dramatically reduce the search space (but of course it is still exponential in the worst case). This approach is called backtracking: after generating each node we check it for constraint consistency, and if it fails we backtrack to some other node.
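A minimal sketch of backtracking search over a generic CSP, assuming the CSP is given as a dict of domains plus a consistency test over partial assignments; the names and interface are illustrative. For SEND+MORE=MONEY, the consistent function would check the all-different constraint and any column sums whose letters are all assigned.

def backtracking_search(domains, consistent, assignment=None):
    # domains: {variable: list of candidate values}
    # consistent(assignment): True iff the partial assignment violates no constraint.
    if assignment is None:
        assignment = {}                       # initial state: the empty assignment
    if len(assignment) == len(domains):
        return assignment                     # every variable assigned, all checks passed
    # Branch on a single unassigned variable (assignment order does not matter).
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):            # prune nodes that already violate a constraint
            result = backtracking_search(domains, consistent, assignment)
            if result is not None:
                return result
        del assignment[var]                   # backtrack: undo and try the next value
    return None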

Often we can do even better than checking each node for immediate constraint violations. We might be able to tell that a state can never lead to a goal state even though none of the current assignments directly violates the specified constraints. For example, consider the state that assigns D=2, E=3, and N=5. This does not directly violate any of the column or uniqueness constraints. But we can see that it will ultimately lead to an inconsistency, because any state with D=2 and E=3 must have Y=5 (from the rightmost column, D+E=Y), and so that value cannot be assigned to N. If we fail to recognize this, we can waste a lot of effort expanding this state to assign combinations of values for M, R, S, ..., only to notice the inconsistency when we get to Y. We can avoid this problem by assigning Y=5 as soon as we assign D=2 and E=3. Such an approach is called constraint propagation. (Illustrate more extensively, if there's time.)
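One simple, mechanical form of constraint propagation is forward checking, sketched below under the same domains/consistent interface as the backtracking sketch: after each assignment, prune from the domains of unassigned variables any value that is no longer consistent, and backtrack as soon as some domain becomes empty. On the example above, once D=2, E=3, and N=5 are assigned, the column constraint forces Y=5 while the uniqueness constraint forbids it, so Y's domain empties and we backtrack without ever enumerating values for M, R, S. This is a sketch, not necessarily the propagation scheme intended in the lecture.

def forward_check(domains, consistent, assignment):
    # Prune the domains of unassigned variables after the latest assignment.
    pruned = {}
    for var in domains:
        if var in assignment:
            continue
        remaining = [v for v in domains[var]
                     if consistent({**assignment, var: v})]
        if not remaining:
            return None                       # dead end detected before expanding further
        pruned[var] = remaining
    return pruned                             # reduced domains for the unassigned variables

# Inside backtracking_search, after a consistent assignment, one could recurse on
# the pruned domains instead of the original ones, e.g.:
#
#     new_domains = forward_check(domains, consistent, assignment)
#     if new_domains is not None:
#         result = backtracking_search({**domains, **new_domains},
#                                      consistent, assignment)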
