Local search algorithms


Last update: February 2, 2010
CMSC 421: Chapter 4, Sections 3-4

Iterative improvement algorithms

In many optimization problems, the path to a goal is irrelevant; the goal state itself is the solution. The state space is then a set of goal states: find one that satisfies constraints (e.g., no two classes at the same time), or find an optimal one (e.g., highest possible value, least possible cost).

In such cases, we can use iterative improvement algorithms: keep a single current state and try to improve it. These algorithms use constant space and are suitable for online as well as offline search.

Example: the n-queens problem

Put n queens on an n x n chessboard so that no two queens are on the same row, column, or diagonal.

Iterative improvement: start with one queen in each column, then move a queen to reduce the number of conflicts (in the pictured boards, h = 5, then h = 2, then h = 0).

Even for very large n (e.g., n = 1 million), this usually finds a solution almost instantly.
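The conflict count and the column-wise improvement move can be sketched in Python. This is a minimal, illustrative min-conflicts-style climber; the function names, the random column choice, and the step limit are assumptions of this sketch, not from the slides.

```python
import random

def conflicts(board):
    """Number of attacking queen pairs; board[i] is the row of the queen in column i."""
    n = len(board)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if board[i] == board[j] or abs(board[i] - board[j]) == j - i)

def hill_climb_queens(n, max_steps=10000, seed=0):
    """Iterative improvement: repeatedly move one queen within its column
    to the row that minimizes the total number of conflicts."""
    rng = random.Random(seed)
    board = [rng.randrange(n) for _ in range(n)]
    for _ in range(max_steps):
        if conflicts(board) == 0:
            return board
        col = rng.randrange(n)
        board[col] = min(range(n),
                         key=lambda r: conflicts(board[:col] + [r] + board[col + 1:]))
    return board
```

Each move is greedy within one column, so a single pass is cheap even when n is large.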

Example: the traveling salesperson problem

Given a complete graph (edges between all pairs of nodes), a tour is a cycle that visits every node exactly once. Find a least-cost tour.

Iterative improvement: start with any tour, then perform pairwise exchanges. Variants of this approach get within 1% of optimal very quickly with thousands of cities.
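The pairwise-exchange step is commonly realized as 2-opt, which reverses a segment of the tour whenever doing so reduces the total cost. The names and structure below are illustrative, not from the slides.

```python
def tour_cost(tour, dist):
    """Total cost of a cyclic tour; dist[i][j] is the cost of edge (i, j)."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Pairwise exchange: reverse tour segments until no reversal improves the cost."""
    tour = tour[:]
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n):
                candidate = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_cost(candidate, dist) < tour_cost(tour, dist):
                    tour, improved = candidate, True
    return tour
```

On four cities at the corners of a unit square, the crossing tour (cost 2 + 2*sqrt(2)) is repaired to the perimeter tour (cost 4) by a single exchange.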

Outline

Hill-climbing
Simulated annealing
Genetic algorithms (briefly)
Local search in continuous spaces (very briefly)

Hill-climbing (or gradient ascent/descent)

Like climbing Everest in thick fog with amnesia.

function Hill-Climbing(problem) returns a state that is a local maximum
  inputs: problem, a problem
  local variables: current, a node
                   neighbor, a node
  current ← Make-Node(Initial-State[problem])
  loop do
    neighbor ← a highest-valued successor of current
    if Value[neighbor] ≤ Value[current] then return State[current]
    current ← neighbor
  end

At each step, move to a neighbor of higher value, in hopes of getting to a solution having the highest possible value. The algorithm is easily modified for problems where we want to minimize rather than maximize.
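The pseudocode above translates almost line for line into Python. Passing the successor and value functions in as parameters is an interface choice of this sketch, not part of the slides.

```python
def hill_climbing(initial, successors, value):
    """Greedy local search: repeatedly move to the highest-valued neighbor,
    stopping when no neighbor improves on the current state."""
    current = initial
    while True:
        neighbors = successors(current)
        if not neighbors:
            return current
        best = max(neighbors, key=value)
        if value(best) <= value(current):
            return current  # local maximum reached
        current = best
```

For example, maximizing -(x - 3)**2 over the integers with neighbors x - 1 and x + 1 climbs from 0 up to 3 and stops there.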

Hill-climbing, continued

It is useful to consider the state-space landscape: the objective function plotted over the state space, with features such as the global maximum, local maxima, "flat" local maxima, and shoulders, along with the current state.

Random-restart hill climbing repeats the search from randomly chosen starting points. Russell & Norvig say it's trivially complete; they're almost right: if there are finitely many local maxima, then

  lim (as restarts → ∞) P(complete) = 1

Simulated annealing

Idea: escape local maxima by allowing some bad moves, but gradually decrease their size and frequency.

function Simulated-Annealing(problem, schedule) returns a solution state
  inputs: problem, a problem
          schedule, a mapping from time to temperature
  local variables: current, a node
                   next, a node
                   T, a temperature controlling the probability of downward steps
  current ← Make-Node(Initial-State[problem])
  for i ← 1 to ∞ do
    T ← schedule[i]
    if T = 0 then return current
    next ← a randomly selected successor of current
    ΔE ← Value[next] − Value[current]
    if ΔE > 0 then current ← next
    else current ← next with probability e^(ΔE/T)

A simple example

Each state is a number x ∈ [0, 1]; the initial state is x = 0; all states are neighbors; Value(x) = x^2; there are 100 iterations; and schedule[i] = 10 · 0.9^i. Running the Simulated-Annealing procedure above on this instance shows the typical behavior: early high temperatures accept almost any move, while late low temperatures accept only improvements.
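The procedure and this instance fit in a few lines of Python. The function interface (value, neighbor, schedule passed as callables) and the fixed seed are assumptions of this sketch.

```python
import math
import random

def simulated_annealing(value, neighbor, initial, schedule, steps, seed=0):
    """Simulated annealing: always accept uphill moves; accept a downhill
    move of size dE with probability exp(dE / T) at temperature T."""
    rng = random.Random(seed)
    current = initial
    for i in range(1, steps + 1):
        T = schedule(i)
        if T == 0:
            return current
        nxt = neighbor(rng, current)
        dE = value(nxt) - value(current)
        if dE > 0 or rng.random() < math.exp(dE / T):
            current = nxt
    return current

# The slide's instance: x in [0, 1], Value(x) = x**2, every state is a
# neighbor, 100 iterations, schedule[i] = 10 * 0.9**i.
result = simulated_annealing(
    value=lambda x: x * x,
    neighbor=lambda rng, x: rng.random(),
    initial=0.0,
    schedule=lambda i: 10 * 0.9 ** i,
    steps=100,
)
```

With this cooling schedule the temperature is near 10 at the start (nearly all moves accepted) and around 3e-4 by iteration 100 (essentially greedy).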

Properties of simulated annealing

At a fixed temperature T, the probability of being in any given state x reaches the Boltzmann distribution

  p(x) = α e^(E(x)/kT)

Let x* be the best state. For every state x other than x*, and for small T,

  p(x*)/p(x) = e^(E(x*)/kT) / e^(E(x)/kT) = e^((E(x*) − E(x))/kT) ≫ 1

From this it can be shown that if we decrease T slowly enough, Pr[reach x*] approaches 1.

Devised by Metropolis et al. (1953) for physical process modelling; widely used in VLSI layout, airline scheduling, etc.

Local beam search

function Beam-Search(problem, k) returns a solution state
  start with k randomly generated states
  loop
    generate all successors of all k states
    if any of them is a solution then return it
    else select the k best successors

This is not the same as k parallel searches: searches that find good states recruit other searches to join them. Problem: often all k states end up on the same local hill.

Stochastic beam search: choose k successors randomly, biased towards good ones. This is a close analogy to natural selection.
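The beam-search loop above can be sketched as follows. The problem interface (random_state, successors, is_goal, value) and the iteration cap are assumptions of this sketch.

```python
import random

def beam_search(k, random_state, successors, is_goal, value, max_iters=100, seed=0):
    """Local beam search: keep the k best states among all successors
    of the current k states, until a goal state appears."""
    rng = random.Random(seed)
    states = [random_state(rng) for _ in range(k)]
    for _ in range(max_iters):
        pool = [s for state in states for s in successors(state)]
        for s in pool:
            if is_goal(s):
                return s
        if not pool:
            break
        states = sorted(pool, key=value, reverse=True)[:k]
    return max(states, key=value)
```

Because the k slots are refilled from the pooled successors, a state with many good neighbors can claim several slots at once, which is exactly how good searches "recruit" the others.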

Genetic algorithms

Genetic algorithms = stochastic local beam search + generate successors from pairs of states. Each state should be a string of characters, and substrings should be meaningful components.

Example: n-queens problem. The i-th character is the row where the i-th queen is located. Crossing over two parents after the third character:

  672 47588  +  752 51447  =  672 51447

Genetic algorithms, continued

One generation on four 8-queens states:

  Initial population   Fitness    Selected pair   Crossover   Mutation
  24748552             24 (31%)   32752411        32748552    32748152
  32752411             23 (29%)   24748552        24752411    24752411
  24415124             20 (26%)   32752411        32752124    32252124
  32543213             11 (14%)   24415124        24415411    24415417

Genetic algorithms ≠ biological evolution: for example, real genes also encode the replication machinery.
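The selection, crossover, and mutation columns of the table can be sketched in Python. The operators follow the slide's 8-character strings; the function names and the roulette-wheel selection scheme are illustrative choices.

```python
import random

def crossover(parent1, parent2, point):
    """Single-point crossover: child takes parent1 up to `point`, parent2 after it."""
    return parent1[:point] + parent2[point:]

def mutate(individual, rng, alphabet="12345678"):
    """Replace one randomly chosen character with a random symbol."""
    i = rng.randrange(len(individual))
    return individual[:i] + rng.choice(alphabet) + individual[i + 1:]

def select(population, fitness, rng):
    """Fitness-proportional (roulette-wheel) selection of one parent."""
    weights = [fitness(ind) for ind in population]
    return rng.choices(population, weights=weights, k=1)[0]
```

Crossing the table's first two parents after the third character reproduces its third column: crossover("24748552", "32752411", 3) gives "24752411".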

Hill-climbing in continuous state spaces

Suppose we want to put three airports in Romania: what locations? This gives a 6-dimensional state space defined by (x1, y1), (x2, y2), (x3, y3). An objective function f(x1, y1, x2, y2, x3, y3) measures desirability, e.g., the sum of squared distances from each city to its nearest airport.

[Figure: map of Romania with the three airport locations (x1, y1), (x2, y2), (x3, y3) marked, plus a table of straight-line distances to Bucharest.]

Hill-climbing in continuous state spaces

A technique from numerical analysis: given a surface z = f(x, y) and a point (x, y), the gradient is the vector

  ∇f(x, y) = (∂f/∂x, ∂f/∂y)

The vector points in the direction of the steepest slope, and its length is proportional to the slope. Gradient methods compute ∇f and use it to increase (or reduce) f, e.g., by

  x ← x + α ∇f(x)

If ∇f = 0, then you've reached a local maximum (or minimum).
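A minimal sketch of the gradient step, using a numerically estimated (central-difference) gradient; the step size α, the iteration count, and the test function are illustrative assumptions.

```python
def numerical_gradient(f, x, delta=1e-6):
    """Estimate the gradient of f at x by central differences in each coordinate."""
    grad = []
    for i in range(len(x)):
        up = x[:i] + [x[i] + delta] + x[i + 1:]
        down = x[:i] + [x[i] - delta] + x[i + 1:]
        grad.append((f(up) - f(down)) / (2 * delta))
    return grad

def gradient_ascent(f, x, alpha=0.1, iters=200):
    """Repeatedly apply the update x <- x + alpha * grad f(x)."""
    for _ in range(iters):
        g = numerical_gradient(f, x)
        x = [xi + alpha * gi for xi, gi in zip(x, g)]
    return x
```

On the concave function f(x, y) = -(x - 1)^2 - (y + 2)^2 the iterates converge to the maximum at (1, -2).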

Hill-climbing in continuous state spaces

For the three-airport problem:

  ∇f = (∂f/∂x1, ∂f/∂y1, ∂f/∂x2, ∂f/∂y2, ∂f/∂x3, ∂f/∂y3)

Look for x1, y1, x2, y2, x3, y3 such that ∇f(x1, y1, x2, y2, x3, y3) = 0.

[Figure: the same map of Romania with airport locations (x1, y1), (x2, y2), (x3, y3).]

Continuous state spaces, continued

Sometimes we can solve ∇f(x) = 0 exactly (e.g., with one city). Newton-Raphson (1664, 1690) iterates

  x ← x − H^(−1) ∇f(x)

to solve ∇f(x) = 0, where H is the Hessian of f: H_ij = ∂²f/∂x_i ∂x_j.

Discretization methods turn a continuous space into a discrete space; e.g., the empirical gradient considers a ±δ change in each coordinate.
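In one dimension the Hessian is just the second derivative, so the Newton-Raphson update collapses to x ← x − g(x)/g'(x) for solving g(x) = 0 (take g = f' to find a stationary point of f). The functions below are illustrative.

```python
def newton_raphson(g, g_prime, x, iters=20):
    """Newton-Raphson iteration x <- x - g(x)/g'(x) for solving g(x) = 0."""
    for _ in range(iters):
        x = x - g(x) / g_prime(x)
    return x
```

For the quadratic f(x) = (x - 3)^2 we have g(x) = f'(x) = 2(x - 3) and g'(x) = 2, and a single step lands exactly on the stationary point x = 3; the same routine also solves x^2 = 2 from g(x) = x^2 - 2.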

Homework

Problems 4.1, 4.2, 4.9 (but you don't need to suggest a way to calculate it), 4.11, 4.12. 10 points each, 50 points total. Due in one week.