Constraint Satisfaction Problems. Chapter 6. Oliver Schulte. Summer 2011.

Outline: CSP examples; backtracking search for CSPs; problem structure and problem decomposition; local search for CSPs.

Environment Type Discussed in This Lecture (classification slide): fully observable: yes; static: yes; deterministic: yes; sequential: no; discrete: yes. The discrete branch leads to vector search, i.e. constraint satisfaction, while the continuous branch leads to function optimization; the other branches on the slide are planning and heuristic search (CMPT 310 - Blind Search) and control, cybernetics.

Agent Architecture Discussed in This Lecture. A model is a structured representation of the world. (a) Atomic: graph-based search, where the state is a black box with no internal structure. (b) Factored: the state is a list or vector of facts. CSP: a fact is of the form Variable = value.

Constraint satisfaction problems (CSPs). CSP: the state is defined by variables Xi with values from domains Di; the goal test is a set of constraints specifying allowable combinations of values for subsets of variables. This allows useful general-purpose algorithms with more power than standard search algorithms; their power is close to that of simulating Turing machines.

Example: Map-Coloring
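
To make the factored representation concrete, here is a minimal Python sketch (hypothetical names, not code from the lecture) that encodes the Australia map-colouring CSP as variables, domains, and binary different-colour constraints:

VARIABLES = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
DOMAINS = {v: {"red", "green", "blue"} for v in VARIABLES}

# Adjacent regions must receive different colours.
NEIGHBORS = {
    ("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
    ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"), ("NSW", "V"),
}

def consistent(assignment):
    # Goal test: no two adjacent regions share a colour (works for partial assignments too).
    return all(assignment[a] != assignment[b]
               for a, b in NEIGHBORS
               if a in assignment and b in assignment)

A complete assignment that passes consistent() is a solution; Tasmania (T) has no neighbours, so any colour works for it.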

CSPs (continued) An assignment is complete when every variable is assigned a value. A solution to a CSP is a complete assignment that satisfies all constraints. Some CSPs require a solution that maximizes an objective function. Constraints over continuous variables are common; linear constraints lead to linear programming. Examples of applications: airline schedules, final exam scheduling, cryptography, Sudoku, crosswords.

Example: Map-Coloring contd.

Varieties of constraints Unary constraints involve a single variable, e.g., SA ≠ green. Binary constraints involve pairs of variables, e.g., SA ≠ WA. Higher-order constraints involve 3 or more variables. Preferences (soft constraints), e.g., red is better than green, are often representable by a cost for each variable assignment; this leads to constrained optimization problems.

Constraint graph Binary CSP: each constraint relates at most two variables Constraint graph: nodes are variables, arcs show constraints General-purpose CSP algorithms use the graph structure to speed up search. E.g., Tasmania is an independent subproblem!
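
One simple way to exploit the graph structure is to split the CSP into its connected components and solve each independent subproblem separately. A sketch, assuming the pair-based NEIGHBORS encoding above (illustrative names):

from collections import defaultdict

def connected_components(variables, constraints):
    # constraints: iterable of (X, Y) pairs; returns a list of variable sets.
    adj = defaultdict(set)
    for x, y in constraints:
        adj[x].add(y)
        adj[y].add(x)
    seen, components = set(), []
    for v in variables:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:                      # depth-first traversal of one component
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        components.append(comp)
    return components

# With the map-colouring constraints above, T ends up in a component of its own.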

Graphs and Factored Representations (demo: UBC AISpace CSP applet). Graphs over variables (concepts, facts) capture local dependencies between them; absence of an edge means independence. AI systems try to reason locally as much as possible. A potential solution to the relevance problem: how does the brain retrieve the relevant facts in a given situation, out of the millions of facts that it knows? Answer: direct links represent direct relevance. Computation in general is a local process operating on a factored state. (What is the state of a program run?)

Problem: consider a constraint graph over variables a, b, c, d, e (shown on the slide). The domain of every variable is {1,2,3,4}. There are 2 unary constraints: variable a cannot take values 3 and 4, and variable b cannot take value 4. There are 8 binary constraints stating that variables connected by an edge cannot have the same value.

Example: 4-Queens Problem. Variables X1, X2, X3, X4 (one queen per column), each with domain {1,2,3,4} for its row; the slide shows the 4x4 board. (Demo: Queen's Problem.)

Standard search formulation (incremental). Let's formulate a state-graph problem, then take advantage of the special structure later. States are defined by the values assigned so far. Initial state: the empty assignment, { }. Successor function: assign a value to an unassigned variable that does not conflict with the current assignment; fail if no legal assignment exists (this is not fixable!). Goal test: the current assignment is complete. This is the same for all CSPs!

Standard search formulation (incremental). Can we use breadth-first search? Branching factor at the top level: n·d, since any of the d values can be assigned to any of the n variables. Next level: (n-1)·d. We generate n!·d^n leaves even though there are only d^n complete assignments. Why? Commutativity: the order in which any given set of actions is applied has no effect on the outcome.

Backtracking search. Variable assignments are commutative, i.e., [WA=red then NT=green] is the same as [NT=green then WA=red]. We only need to consider assignments to a single variable at each node: then b = d and there are d^n leaves. Depth-first search for CSPs with single-variable assignments is called backtracking search. Is this uninformed or informed? Backtracking search is the basic uninformed algorithm for CSPs.
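
A minimal backtracking-search sketch, assuming the VARIABLES/DOMAINS/consistent encoding sketched earlier (illustrative, not the lecture's own implementation):

def backtrack(assignment, variables, domains, consistent):
    if len(assignment) == len(variables):
        return assignment                                    # complete and consistent
    var = next(v for v in variables if v not in assignment)  # fixed order; heuristics come later
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):                           # check constraints touching var
            result = backtrack(assignment, variables, domains, consistent)
            if result is not None:
                return result
        del assignment[var]                                  # undo and try the next value
    return None                                              # caller backtracks

# solution = backtrack({}, VARIABLES, DOMAINS, consistent)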

Improving backtracking efficiency. General-purpose methods can give huge gains in speed: Which variable should be assigned next? In what order should its values be tried? Can we detect inevitable failure early? Constraint learning: can we keep track of what search has learned? Can we take advantage of problem structure?

Backtracking example (a sequence of slides stepping through successive variable assignments on the map-colouring problem).

Most constrained variable: choose the variable with the fewest legal values, a.k.a. the minimum remaining values (MRV) heuristic. This only picks a variable, not a value. (Demo for MRV.)

Most constraining variable. How do we choose among several variables with the fewest legal values? Use a tie-breaker among the most constrained variables: the degree heuristic chooses the variable with the most constraints on remaining variables.

Least constraining value. Given a variable, choose the least constraining value: the one that rules out the fewest values in the remaining variables. Intuition: choose the value most likely to lead to a solution. Combining these heuristics makes 1000-queens feasible.
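
These heuristics can be written as small ordering functions. The sketch below assumes a current_domains dict of remaining legal values per variable and a neighbors adjacency map (variable -> set of neighbouring variables, which could be built from the NEIGHBORS pairs above); both names are illustrative:

def select_unassigned_variable(assignment, current_domains, neighbors):
    unassigned = [v for v in current_domains if v not in assignment]
    # MRV: fewest remaining legal values; the degree heuristic breaks ties by the
    # number of constraints on other unassigned variables.
    return min(unassigned,
               key=lambda v: (len(current_domains[v]),
                              -sum(1 for n in neighbors[v] if n not in assignment)))

def order_domain_values(var, assignment, current_domains, neighbors):
    # LCV: prefer the value that rules out the fewest values in the neighbours.
    def ruled_out(value):
        return sum(value in current_domains[n]
                   for n in neighbors[var] if n not in assignment)
    return sorted(current_domains[var], key=ruled_out)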

Idea: Forward checking. Keep track of the remaining legal values for unassigned variables, and terminate search when any variable has no legal values. (A sequence of slides steps through forward checking on the map-colouring example.)
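
A forward-checking sketch for different-value constraints such as map colouring, again assuming a current_domains dict and a neighbors adjacency map; prunings are recorded so they can be undone on backtracking:

def forward_check(var, value, assignment, current_domains, neighbors):
    # After assigning var=value, prune that value from unassigned neighbours.
    pruned, ok = [], True
    for n in neighbors[var]:
        if n in assignment:
            continue
        if value in current_domains[n]:
            current_domains[n].remove(value)
            pruned.append((n, value))
        if not current_domains[n]:        # wipe-out: some variable has no legal value
            ok = False
            break
    return ok, pruned

def undo_pruning(pruned, current_domains):
    for n, value in pruned:
        current_domains[n].add(value)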

Constraint propagation. Forward checking propagates information from assigned to unassigned variables, but it doesn't provide early detection for all failures: NT and SA cannot both be blue! Constraint propagation repeatedly enforces constraints locally. To pay off, it has to be faster than searching.

Example: 4-Queens Problem (a sequence of slides steps through forward checking on the 4-queens board: after each queen is placed, the remaining legal values are pruned from the domains of X1-X4; when some domain is wiped out, the search backtracks and tries the next value).

Constraint propagation. Techniques like constraint propagation (CP) and forward checking (FC) are in effect eliminating parts of the search space. Inference complements search (= simulation). Constraint propagation goes further than FC by repeatedly enforcing constraints locally. Arc consistency (AC) is a systematic procedure for constraint propagation (Mackworth 1977, UBC).

Arc consistency. An arc X → Y is consistent if for every value x of X there is some value y of Y consistent with x (note that this is a directed property). Consider the state of the search after WA and Q are assigned: SA → NSW is consistent if SA=blue and NSW=red.

Arc consistency. X → Y is consistent if for every value x of X there is some value y of Y consistent with x. NSW → SA is consistent for NSW=red with SA=blue, but for NSW=blue there is no consistent value of SA.

Arc consistency. We can enforce arc consistency: the arc can be made consistent by removing blue from NSW. Continue to propagate constraints: check V → NSW; it is not consistent for V = red, so remove red from V.

Arc consistency. Continue to propagate constraints: SA → NT is not consistent and cannot be made consistent. Arc consistency detects failure earlier than forward checking.

Arc consistency checking. It can be run as preprocessing before search starts or after each assignment during search. AC must be run repeatedly until no inconsistency remains. Trade-off: it requires some overhead, but it is generally more effective than direct search; in effect it can eliminate large (inconsistent) parts of the state space more effectively than search can. We need a systematic method for arc checking: if X loses a value, the neighbors of X need to be rechecked.

Arc consistency checking
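
A sketch of the AC-3 procedure (Mackworth 1977), assuming set-valued domains and a neighbors adjacency map; the "different" argument stands in for the binary constraint and defaults to "not equal":

from collections import deque

def ac3(domains, neighbors, different=lambda x, y: x != y):
    # Repeatedly make every directed arc (X, Y) consistent; if X loses a value,
    # re-queue the arcs pointing into X.
    queue = deque((x, y) for x in neighbors for y in neighbors[x])
    while queue:
        x, y = queue.popleft()
        if revise(domains, x, y, different):
            if not domains[x]:
                return False                  # inconsistency detected
            for z in neighbors[x] - {y}:
                queue.append((z, x))          # recheck arcs into x
    return True

def revise(domains, x, y, different):
    # Remove values of x that have no consistent partner in y's domain.
    removed = {vx for vx in domains[x]
               if not any(different(vx, vy) for vy in domains[y])}
    domains[x] -= removed
    return bool(removed)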

Back-tracking or back-jumping? Consider the partial assignment {Q=red, NSW=green, V=blue, T=red}.

Local search for CSPs. Use a complete-state representation: the initial state has all variables assigned values, and successor states change 1 (or more) values. For CSPs we allow states with unsatisfied constraints (unlike backtracking); operators reassign variable values. Hill-climbing with n-queens is an example. Variable selection: randomly select any conflicted variable. (Local stochastic search demo.) Value selection: the min-conflicts heuristic selects the new value that results in the minimum number of conflicts with the other variables.

Local search for CSP

function MIN-CONFLICTS(csp, max_steps) returns a solution or failure
  inputs: csp, a constraint satisfaction problem
          max_steps, the number of steps allowed before giving up
  current ← an initial complete assignment for csp
  for i = 1 to max_steps do
    if current is a solution for csp then return current
    var ← a randomly chosen conflicted variable from VARIABLES[csp]
    value ← the value v for var that minimizes CONFLICTS(var, v, current, csp)
    set var = value in current
  return failure
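
A runnable Python sketch of the same procedure; the conflicts(var, value, assignment) counting function is an assumption supplied by the caller, and unlike the pseudocode, ties between equally good values are broken deterministically by min():

import random

def min_conflicts(variables, domains, conflicts, max_steps=10_000):
    # Start from a random complete assignment.
    current = {v: random.choice(list(domains[v])) for v in variables}
    for _ in range(max_steps):
        conflicted = [v for v in variables if conflicts(v, current[v], current) > 0]
        if not conflicted:
            return current                    # solution found
        var = random.choice(conflicted)       # random conflicted variable
        # Value selection: minimise the number of conflicts with other variables.
        current[var] = min(domains[var], key=lambda val: conflicts(var, val, current))
    return None                               # give up after max_steps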

Min-conflicts example 1: use of the min-conflicts heuristic in hill-climbing (the slide shows successive states with h=5, h=3, and h=1 conflicts).

Min-conflicts example 2: a two-step solution of an 8-queens problem using the min-conflicts heuristic. At each stage a queen is chosen for reassignment within its column, and the algorithm moves the queen to the min-conflicts square, breaking ties randomly.

Advantages of local search. Local search can be particularly useful in an online setting. Airline schedule example: mechanical problems may require that a plane be taken out of service; we can locally search for another nearby solution in state space, which is much better (and faster) in practice than finding an entirely new schedule. The runtime of min-conflicts is roughly independent of problem size: it can solve the million-queens problem in roughly 50 steps. Why? n-queens is easy for local search because of the relatively high density of solutions in state space.
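
As a usage sketch, n-queens can be fed to the min_conflicts function above: queens live in distinct columns by construction, so a conflict is just another queen in the same row or on the same diagonal (make_queens_csp is an illustrative helper, not from the lecture):

def make_queens_csp(n):
    variables = list(range(n))                 # variable i = queen in column i
    domains = {i: set(range(n)) for i in variables}   # value = its row

    def conflicts(var, value, assignment):
        # Count other queens in the same row or on the same diagonal.
        return sum(1 for other, row in assignment.items()
                   if other != var and (row == value or
                                        abs(row - value) == abs(other - var)))
    return variables, domains, conflicts

variables, domains, conflicts = make_queens_csp(8)
solution = min_conflicts(variables, domains, conflicts)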

Graph structure and problem complexity. Divide-and-conquer: solve disconnected subproblems independently. Suppose each subproblem has c variables out of a total of n. The worst-case solution cost is O((n/c)·d^c), i.e. linear in n, instead of O(d^n), which is exponential in n. E.g. with n = 80, c = 20, d = 2: 2^80 nodes take about 4 billion years at 10 million nodes/sec, while 4·2^20 nodes take about 0.4 seconds at 10 million nodes/sec.
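
The arithmetic is easy to check; this snippet just recomputes the node counts and times under the numbers on the slide:

n, d, c = 80, 2, 20
nodes_whole   = d ** n                 # one big CSP: 2^80 complete assignments
nodes_divided = (n // c) * d ** c      # four independent subproblems of 2^20 each

rate = 10_000_000                      # nodes examined per second
print(nodes_whole / rate / (3600 * 24 * 365), "years")   # roughly 4 billion years
print(nodes_divided / rate, "seconds")                    # roughly 0.4 seconds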

Tree-structured CSPs. Theorem: if a constraint graph has no loops, then the CSP can be solved in O(n·d^2) time, linear in the number of variables! Compare this with a general CSP, where the worst case is O(d^n).

Algorithm for Solving Tree-structured CSPs. Choose some variable as the root and order the variables from the root to the leaves such that every node's parent precedes it in the ordering; label the variables X1 to Xn. Every variable now has 1 parent. Backward pass: for j from n down to 2, apply arc consistency to the arc (Parent(Xj), Xj), removing values from Parent(Xj) if needed to make the graph directed-arc-consistent. Forward pass: for j from 1 to n, assign Xj consistently with Parent(Xj).
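
A sketch of this two-pass algorithm, assuming an order list with every parent preceding its children, a parent map for the non-root variables, set-valued domains, and a consistent_pair(parent_value, child_value) test (e.g. "not equal" for map colouring); all names are illustrative:

def solve_tree_csp(order, parent, domains, consistent_pair):
    # Backward pass: make each arc (parent, child) consistent, from the leaves up.
    for child in reversed(order[1:]):
        p = parent[child]
        domains[p] = {vp for vp in domains[p]
                      if any(consistent_pair(vp, vc) for vc in domains[child])}
        if not domains[p]:
            return None                          # no solution
    if not domains[order[0]]:
        return None
    # Forward pass: assign each variable consistently with its parent.
    assignment = {order[0]: next(iter(domains[order[0]]))}
    for child in order[1:]:
        vp = assignment[parent[child]]
        assignment[child] = next(vc for vc in domains[child]
                                 if consistent_pair(vp, vc))
    return assignment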

Tree CSP Example (figure: a tree-structured constraint graph with colour domains).

Tree CSP Example: Backward Pass (constraint propagation); the figure shows the pruned domains at each node.

Tree CSP Example: Backward Pass (constraint propagation) followed by Forward Pass (assignment); the forward pass yields the assignment B, G, R, R, G, B.

Tree CSP complexity. Backward pass: n arc checks, each of complexity at worst d^2. Forward pass: n variable assignments, O(n·d). Overall complexity is O(n·d^2). The algorithm works because if the backward pass succeeds, then every variable by definition has a legal assignment in the forward pass.

What about non-tree CSPs? The general idea is to convert the graph to a tree. There are 2 general approaches: 1. Assign values to specific variables (cycle cutset method); this tries to exploit context-specific independence. 2. Construct a tree decomposition of the graph: connected subproblems (subgraphs) become nodes in a tree structure.

Tree Decompositions (figure: the constraint graph split into overlapping subproblems over the colours red, green, and blue).

Rules for a Tree Decomposition Every variable appears in at least one of the subproblems. If two variables are connected in the original problem, they must appear together (with the constraint) in at least one subproblem. If a variable appears in two subproblems, it must appear in each node on the path between them.

Tree Decomposition Algorithm. View each subproblem as a super-variable whose domain is the set of solutions for the subproblem, obtained by running a CSP solver on each subproblem. The maximum subproblem size (minus one, minimized over decompositions) is the treewidth of the constraint graph. E.g., there are 6 solutions for 3 fully connected variables in the map problem. Now use the tree CSP algorithm to solve the constraints connecting the subproblems: declare a subproblem the root node, create the tree, then run the backward and forward passes. This is an example of a divide-and-conquer strategy.
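
The "6 solutions" count is easy to verify: 3 fully connected variables with 3 colours admit exactly the 3! all-different assignments, which become the domain of the corresponding super-variable:

from itertools import product

colours = ["red", "green", "blue"]
solutions = [(a, b, c) for a, b, c in product(colours, repeat=3)
             if a != b and b != c and a != c]
print(len(solutions))   # 6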

Tree Decomposition. Every graph has a tree decomposition, not just constraint graphs. A tree decomposition exhibits an optimal divide-and-conquer strategy, for CSPs, optimization, search, and inference. Typical result: if the treewidth of a problem graph is bounded by a constant, then the search/optimization/inference problem is solvable efficiently by divide and conquer.

Summary. CSPs are a special kind of problem: states are defined by the values of a fixed set of variables, and the goal test is defined by constraints on variable values: a factored representation. Backtracking = depth-first search with one variable assigned per node. Heuristics: variable-ordering and value-selection heuristics help significantly. Constraint propagation does additional work to constrain values and detect inconsistencies; it works effectively when combined with heuristics. Iterative min-conflicts is often effective in practice. The graph structure of a CSP determines problem complexity; e.g., tree-structured CSPs can be solved in linear time.