Informed search algorithms: outline, best-first search, greedy best-first search


Outline (Chapter 4)
Best-first search
A* search
Heuristics
Local search algorithms: hill-climbing search, simulated annealing search, local beam search, genetic algorithms

Best-first search
Idea: use an evaluation function f(n) for each node, an estimate of "desirability", and expand the most desirable unexpanded node.
Implementation: order the nodes in the fringe in decreasing order of desirability.
Special cases: greedy best-first search, A* search.

Greedy best-first search
Evaluation function: f(n) = h(n) (heuristic), where h(n) is an estimate of the cost from n to the goal.
E.g., h_SLD(n) = straight-line distance from n to Bucharest.
Greedy best-first search expands the node that appears to be closest to the goal.
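Greedy best-first search can be sketched with a priority queue ordered by h(n). The graph fragment and straight-line distances below are illustrative (a small piece of the Romania map, not the full one from the text):

```python
import heapq

def greedy_best_first_search(start, goal, graph, h):
    """Expand the unexpanded node with the smallest h(n) first (f(n) = h(n))."""
    frontier = [(h[start], start, [start])]   # priority queue ordered by h
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for succ in graph.get(node, []):
            if succ not in visited:
                heapq.heappush(frontier, (h[succ], succ, path + [succ]))
    return None

# Illustrative fragment of the Romania map
graph = {"Arad": ["Sibiu", "Timisoara"],
         "Sibiu": ["Fagaras", "Rimnicu"],
         "Fagaras": ["Bucharest"],
         "Rimnicu": ["Pitesti"],
         "Pitesti": ["Bucharest"]}
h_sld = {"Arad": 366, "Sibiu": 253, "Fagaras": 176, "Rimnicu": 193,
         "Pitesti": 100, "Timisoara": 329, "Bucharest": 0}

print(greedy_best_first_search("Arad", "Bucharest", graph, h_sld))
# greedy follows the route via Fagaras, which looks closest but is not the cheapest
```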

Properties of greedy best-first search
Complete? No: it can get stuck in loops, e.g., Iasi -> Neamt -> Iasi -> Neamt.
Time? O(b^m), but a good heuristic can give dramatic improvement.
Space? O(b^m): keeps all nodes in memory.
Optimal? No.

A* search
Idea: avoid expanding paths that are already expensive.
Evaluation function: f(n) = g(n) + h(n), where
g(n) = cost so far to reach n,
h(n) = estimated cost from n to the goal,
f(n) = estimated total cost of the path through n to the goal.
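A minimal A* sketch, keeping frontier nodes ordered by f(n) = g(n) + h(n); the step costs and straight-line distances below are illustrative values for a fragment of the Romania map:

```python
import heapq

def astar(start, goal, graph, h):
    """A*: expand the frontier node with the smallest f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for succ, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2
                heapq.heappush(frontier, (g2 + h(succ), g2, succ, path + [succ]))
    return None, float("inf")

# Same illustrative Romania fragment, now with step costs
graph = {"Arad": [("Sibiu", 140), ("Timisoara", 118)],
         "Sibiu": [("Fagaras", 99), ("Rimnicu", 80)],
         "Fagaras": [("Bucharest", 211)],
         "Rimnicu": [("Pitesti", 97)],
         "Pitesti": [("Bucharest", 101)]}
h_sld = {"Arad": 366, "Sibiu": 253, "Fagaras": 176, "Rimnicu": 193,
         "Pitesti": 100, "Timisoara": 329, "Bucharest": 0}

path, cost = astar("Arad", "Bucharest", graph, lambda n: h_sld[n])
print(path, cost)  # A* prefers the cheaper route through Rimnicu and Pitesti
```

Unlike the greedy search above, A* accounts for the cost already paid (g), so the route via Fagaras, which looks closer by h alone, loses to the cheaper route.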

Admissible heuristics
A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n.
An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic.
Example: h_SLD(n) (it never overestimates the actual road distance).
Theorem: if h(n) is admissible, then A* using TREE-SEARCH is optimal.

Admissible heuristics for the 8-puzzle
h1(n) = number of misplaced tiles
h2(n) = total Manhattan distance (i.e., the number of squares each tile is from its desired location)
h1(S) = ? h2(S) = ?

Admissible heuristics for the 8-puzzle (answers)
h1(S) = 8
h2(S) = 3+1+2+2+2+3+3+2 = 18

Dominance
If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1, and h2 is better for search.
Typical search costs (average number of nodes expanded):
d = 12: IDS = 3,644,035 nodes; A*(h1) = 227 nodes; A*(h2) = 73 nodes
d = 24: IDS = too many nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes

Relaxed problems
A problem with fewer restrictions on the actions is called a relaxed problem.
The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem.
If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the shortest solution.
If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the shortest solution.

Optimality of A*
A* expands nodes in order of increasing f value.
It gradually adds "f-contours" of nodes: contour i contains all nodes with f = f_i, where f_i < f_{i+1}.

Properties of A*
Complete? Yes (unless there are infinitely many nodes with f ≤ f(G)).
Time? Exponential.
Space? Keeps all nodes in memory.
Optimal? Yes.
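Both heuristics are cheap to compute directly. A minimal sketch, assuming the start state S and goal configuration from the slides, flattened row by row with 0 as the blank:

```python
def h1(state, goal):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal, width=3):
    """Total Manhattan distance of each tile from its goal square."""
    pos = {tile: (i // width, i % width) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            r, c = i // width, i % width
            gr, gc = pos[tile]
            total += abs(r - gr) + abs(c - gc)
    return total

# Start state S and goal, flattened row by row (0 = blank)
S    = [7, 2, 4, 5, 0, 6, 8, 3, 1]
goal = [0, 1, 2, 3, 4, 5, 6, 7, 8]
print(h1(S, goal), h2(S, goal))  # 8 18
```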

Proof of the theorem
Theorem: if h(n) is admissible, then A* using TREE-SEARCH is optimal.
Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.
f(G2) = g(G2), since h(G2) = 0
g(G) < g(G2), since G2 is suboptimal
f(G) = g(G), since h(G) = 0
f(G) < f(G2), from the above
h(n) ≤ h*(n), since h is admissible
g(n) + h(n) ≤ g(n) + h*(n), so f(n) ≤ f(G)
Hence f(n) < f(G2), and A* will never select G2 for expansion.

Consistent heuristics
A heuristic is consistent if for every node n and every successor n' of n generated by any action a,
h(n) ≤ c(n, a, n') + h(n').
If h is consistent, we have
f(n') = g(n') + h(n') = g(n) + c(n, a, n') + h(n') ≥ g(n) + h(n) = f(n),
i.e., f(n) is non-decreasing along any path.
Theorem: if h(n) is consistent, then A* using GRAPH-SEARCH is optimal.

Local search algorithms
In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution.
State space = set of "complete" configurations.
Find a configuration satisfying the constraints, e.g., n-queens.
In such cases, we can use local search algorithms: keep a single "current" state and try to improve it.

Example: n-queens
Put n queens on an n x n board with no two queens on the same row, column, or diagonal.

Representations of 8-queens

Hill-climbing search
"Like climbing Everest in thick fog with amnesia."
Utility function: number of pairs of attacking queens (4 pairs for the state shown in the figure).
Neighbors: the states obtained by one operation; e.g., switching two columns produces n(n-1)/2 neighbors.
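The utility function above (number of pairs of attacking queens) is easy to compute directly. A minimal sketch, using the common representation queens[c] = row of the queen in column c; the board states below are illustrative, not the ones pictured on the slides:

```python
from itertools import combinations

def attacking_pairs(queens):
    """Count pairs of queens attacking each other (same row or same diagonal).
    One queen per column, so columns never clash."""
    pairs = 0
    for (c1, r1), (c2, r2) in combinations(enumerate(queens), 2):
        if r1 == r2 or abs(r1 - r2) == abs(c1 - c2):
            pairs += 1
    return pairs

print(attacking_pairs([0, 1, 2, 3]))  # 6: all four queens share one diagonal
print(attacking_pairs([1, 3, 0, 2]))  # 0: a solution to 4-queens
```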

Hill-climbing search
Problem: depending on the initial state, hill climbing can get stuck in local maxima.

Hill-climbing search: 8-queens problem
h = number of pairs of queens that are attacking each other, either directly or indirectly.
h = 17 for the state shown in the figure; a nearby local minimum has h = 1.

Local beam search
Keep track of k states rather than just one.
Start with k randomly generated states.
At each iteration, generate all the successors of all k states.
If any one is a goal state, stop; otherwise select the k best successors from the complete list and repeat.
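The 8-queens hill climbing described above can be sketched as steepest-descent on h (pairs of attacking queens), moving one queen within its column per step. This is a sketch of the general technique, not the exact neighbor operator from the slides, and it stops at the first local minimum rather than restarting:

```python
import random

def attacking_pairs(queens):
    """h: number of pairs of queens attacking each other."""
    n = len(queens)
    return sum(1 for c1 in range(n) for c2 in range(c1 + 1, n)
               if queens[c1] == queens[c2]
               or abs(queens[c1] - queens[c2]) == c2 - c1)

def hill_climb(n=8, rng=random):
    """Steepest-descent hill climbing: move one queen within its column to
    the neighbor with fewest attacking pairs; stop at a local minimum."""
    queens = [rng.randrange(n) for _ in range(n)]
    while True:
        current = attacking_pairs(queens)
        best, best_move = current, None
        for col in range(n):
            for row in range(n):
                if row != queens[col]:
                    old = queens[col]
                    queens[col] = row
                    h = attacking_pairs(queens)
                    queens[col] = old
                    if h < best:
                        best, best_move = h, (col, row)
        if best_move is None:          # local minimum (possibly h > 0)
            return queens, current
        queens[best_move[0]] = best_move[1]

state, h = hill_climb()
print(state, h)  # h == 0 on a lucky start; otherwise restart from a new random state
```

Since h strictly decreases at every step, the loop always terminates, but, as the slides note, it may halt at a local minimum with h > 0; random restarts fix this in practice.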