Game playing
Chapter 6

Outline
- Games
- Perfect play: minimax decisions; α-β pruning
- Resource limits and approximate evaluation
- Games of chance
- Games of imperfect information

Games vs. search problems
"Unpredictable" opponent ⇒ solution is a strategy specifying a move for every possible opponent reply
Time limits ⇒ unlikely to find goal, must approximate
Plan of attack:
- Computer considers possible lines of play (Babbage, 1846)
- Algorithm for perfect play (Zermelo, 1912; Von Neumann, 1944)
- Finite horizon, approximate evaluation (Zuse, 1945; Wiener, 1948; Shannon, 1950)
- First chess program (Turing, 1951)
- Machine learning to improve evaluation accuracy (Samuel, 1952-57)
- Pruning to allow deeper search (McCarthy, 1956)

Types of games

                        | deterministic                 | chance
perfect information     | chess, checkers, go, othello  | backgammon, monopoly
imperfect information   | battleships, blind tictactoe  | bridge, poker, scrabble, nuclear war

Game tree (2-player, deterministic, turns)
[Figure: partial tic-tac-toe game tree. MAX places X, MIN places O; plies alternate from the empty board down to TERMINAL states with utilities -1, 0, +1.]

Minimax
Perfect play for deterministic, perfect-information games
Idea: choose move to position with highest minimax value = best achievable payoff against best play
E.g., 2-ply game:
[Figure: MAX root with moves a1, a2, a3 to three MIN nodes valued 3, 2, 2; leaf utilities 3, 12, 8 / 2, 4, 6 / 14, 5, 2. The root's minimax value is 3.]

Minimax algorithm

function Minimax-Decision(state) returns an action
   inputs: state, current state in game
   return the a in Actions(state) maximizing Min-Value(Result(a, state))

function Max-Value(state) returns a utility value
   if Terminal-Test(state) then return Utility(state)
   v ← -∞
   for a, s in Successors(state) do v ← Max(v, Min-Value(s))
   return v

function Min-Value(state) returns a utility value
   if Terminal-Test(state) then return Utility(state)
   v ← ∞
   for a, s in Successors(state) do v ← Min(v, Max-Value(s))
   return v
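
The pseudocode above translates directly into Python. A minimal runnable sketch, assuming a hypothetical game object with actions, result, successors, terminal_test, and utility methods (these names are illustrative, not from the slides):

import math

def minimax_decision(state, game):
    """Return the action for MAX with the highest minimax value."""
    return max(game.actions(state),
               key=lambda a: min_value(game.result(a, state), game))

def max_value(state, game):
    if game.terminal_test(state):
        return game.utility(state)
    v = -math.inf                      # v <- -infinity
    for s in game.successors(state):
        v = max(v, min_value(s, game))
    return v

def min_value(state, game):
    if game.terminal_test(state):
        return game.utility(state)
    v = math.inf                       # v <- +infinity
    for s in game.successors(state):
        v = min(v, max_value(s, game))
    return v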

Properties of minimax
Complete?? Yes, if tree is finite (chess has specific rules for this). NB a finite strategy can exist even in an infinite tree!
Optimal?? Yes, against an optimal opponent. Otherwise??
Time complexity?? O(b^m)
Space complexity?? O(bm) (depth-first exploration)
For chess, b ≈ 35, m ≈ 100 for "reasonable" games ⇒ exact solution completely infeasible
But do we need to explore every path?

α-β pruning example
[Figure: α-β pruning applied step by step to the 2-ply tree above. The first MIN node evaluates to 3 from leaves 3, 12, 8; at the second MIN node, seeing the leaf 2 proves its value is ≤ 2, so its remaining leaves are pruned; at the third MIN node the bound tightens through 14 and 5 until the final leaf 2 fixes that node at 2 and the root's minimax value at 3.]

Why is it called α-β?
[Figure: an alternating MAX/MIN/MAX/MIN path, with the value V of the node currently being explored at the bottom.]
α is the best value (to MAX) found so far off the current path
If V is worse than α, MAX will avoid it ⇒ prune that branch
Define β similarly for MIN

The α-β algorithm

function Alpha-Beta-Decision(state) returns an action
   return the a in Actions(state) maximizing Min-Value(Result(a, state))

function Max-Value(state, α, β) returns a utility value
   inputs: state, current state in game
      α, the value of the best alternative for max along the path to state
      β, the value of the best alternative for min along the path to state
   if Terminal-Test(state) then return Utility(state)
   v ← -∞
   for a, s in Successors(state) do
      v ← Max(v, Min-Value(s, α, β))
      if v ≥ β then return v
      α ← Max(α, v)
   return v

function Min-Value(state, α, β) returns a utility value
   same as Max-Value but with roles of α, β reversed
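
Under the same hypothetical Python game interface as before, α-β is a small extension of plain minimax; the two bound checks are the only additions. A sketch:

import math

def alpha_beta_decision(state, game):
    """Choose MAX's action using alpha-beta search."""
    return max(game.actions(state),
               key=lambda a: min_value(game.result(a, state), game,
                                       -math.inf, math.inf))

def max_value(state, game, alpha, beta):
    if game.terminal_test(state):
        return game.utility(state)
    v = -math.inf
    for s in game.successors(state):
        v = max(v, min_value(s, game, alpha, beta))
        if v >= beta:              # MIN has a better alternative elsewhere: prune
            return v
        alpha = max(alpha, v)
    return v

def min_value(state, game, alpha, beta):
    if game.terminal_test(state):
        return game.utility(state)
    v = math.inf
    for s in game.successors(state):
        v = min(v, max_value(s, game, alpha, beta))
        if v <= alpha:             # MAX has a better alternative elsewhere: prune
            return v
        beta = min(beta, v)
    return v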

Properties of α-β
Pruning does not affect final result
Good move ordering improves effectiveness of pruning
With perfect ordering, time complexity = O(b^(m/2)) ⇒ doubles solvable depth
A simple example of the value of reasoning about which computations are relevant (a form of metareasoning)
Unfortunately, 35^50 is still impossible!

Resource limits
Standard approach:
- Use Cutoff-Test instead of Terminal-Test, e.g., depth limit (perhaps add quiescence search)
- Use Eval instead of Utility, i.e., an evaluation function that estimates desirability of position
Suppose we have 100 seconds, explore 10^4 nodes/second ⇒ 10^6 nodes per move ≈ 35^(8/2) ⇒ α-β reaches depth 8 ⇒ pretty good chess program
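
One way to realize the Cutoff-Test/Eval substitution, sketched under the same assumed game interface plus a hypothetical heuristic method game.eval(state):

import math

def h_alpha_beta(state, game, depth, alpha=-math.inf, beta=math.inf,
                 maximizing=True):
    # Cutoff-Test: stop at terminal states or at the depth limit.
    if game.terminal_test(state):
        return game.utility(state)
    if depth == 0:
        return game.eval(state)        # Eval replaces Utility at the cutoff
    if maximizing:
        v = -math.inf
        for s in game.successors(state):
            v = max(v, h_alpha_beta(s, game, depth - 1, alpha, beta, False))
            if v >= beta:
                return v
            alpha = max(alpha, v)
        return v
    v = math.inf
    for s in game.successors(state):
        v = min(v, h_alpha_beta(s, game, depth - 1, alpha, beta, True))
        if v <= alpha:
            return v
        beta = min(beta, v)
    return v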

Evaluation functions
[Figure: two chess positions. Left: Black to move, White slightly better. Right: White to move, Black winning.]
For chess, typically a linear weighted sum of features:
Eval(s) = w1 f1(s) + w2 f2(s) + ... + wn fn(s)
e.g., w1 = 9 with f1(s) = (number of white queens) - (number of black queens), etc.
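
The queen example generalizes to any feature set. A toy sketch of such a weighted sum; apart from the queen's weight of 9, the weights and feature choice here are illustrative assumptions:

MATERIAL_WEIGHTS = {"Q": 9, "R": 5, "B": 3, "N": 3, "P": 1}   # assumed values

def eval_material(white_counts, black_counts):
    """Eval(s) = sum of w_i * f_i(s), where each feature f_i is a
    white-minus-black piece-count difference."""
    return sum(w * (white_counts.get(p, 0) - black_counts.get(p, 0))
               for p, w in MATERIAL_WEIGHTS.items())

# Example: an extra white queen against an extra black rook:
# eval_material({"Q": 2, "R": 2}, {"Q": 1, "R": 3}) == 9 - 5 == 4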

Digression: Exact values don't matter
[Figure: the same MAX/MIN tree evaluated twice. With leaves 1, 2, 2, 4 the MIN nodes take values 1 and 2 and MAX chooses the right move; with the leaves monotonically transformed to 1, 20, 20, 400 the MIN nodes take values 1 and 20 and the chosen move is unchanged.]
Behaviour is preserved under any monotonic transformation of Eval
Only the order matters: payoff in deterministic games acts as an ordinal utility function

Deterministic games in practice
Checkers: Chinook ended the 40-year reign of human world champion Marion Tinsley in 1994. It used an endgame database defining perfect play for all positions involving 8 or fewer pieces on the board, a total of 443,748,401,247 positions.
Chess: Deep Blue defeated human world champion Garry Kasparov in a six-game match in 1997. Deep Blue searches 200 million positions per second, uses very sophisticated evaluation, and undisclosed methods for extending some lines of search up to 40 ply.
Othello: human champions refuse to compete against computers, who are too good.
Go: human champions refuse to compete against computers, who are too bad. In go, b > 300, so most programs use pattern knowledge bases to suggest plausible moves.

Nondeterministic games: backgammon
[Figure: backgammon board with points numbered 1-25 and the bar; dice rolls determine the legal moves.]

Nondeterministic games in general
In nondeterministic games, chance is introduced by dice, card-shuffling
Simplified example with coin-flipping:
[Figure: MAX root over two CHANCE nodes valued 3 and -1, each with 0.5/0.5 branches to MIN nodes valued 2, 4, 0, -2; leaf utilities 2, 4 / 7, 4 / 6, 0 / 5, -2. E.g., the left chance node's value is 0.5·2 + 0.5·4 = 3.]

Algorithm for nondeterministic games
Expectiminimax gives perfect play
Just like Minimax, except we must also handle chance nodes:
...
if state is a Max node then
   return the highest ExpectiMinimax-Value of Successors(state)
if state is a Min node then
   return the lowest ExpectiMinimax-Value of Successors(state)
if state is a chance node then
   return average of ExpectiMinimax-Value of Successors(state)
...
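
A sketch of ExpectiMinimax under the same assumed interface, extended with a hypothetical node_type(state) and, for chance nodes, chance_successors(state) yielding (probability, state) pairs:

def expectiminimax(state, game):
    """Value of a state in a game with MAX, MIN, and chance nodes."""
    if game.terminal_test(state):
        return game.utility(state)
    kind = game.node_type(state)       # 'max', 'min', or 'chance' (assumed)
    if kind == 'max':
        return max(expectiminimax(s, game) for s in game.successors(state))
    if kind == 'min':
        return min(expectiminimax(s, game) for s in game.successors(state))
    # Chance node: probability-weighted average of successor values
    return sum(p * expectiminimax(s, game)
               for p, s in game.chance_successors(state))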

Nondeterministic games in practice
Dice rolls increase b: 21 possible rolls with 2 dice
Backgammon ≈ 20 legal moves (can be 6,000 with 1-1 roll)
depth 4 ⇒ 20 × (21 × 20)^3 ≈ 1.2 × 10^9
As depth increases, probability of reaching a given node shrinks ⇒ value of lookahead is diminished
α-β pruning is much less effective
TD-Gammon uses depth-2 search + very good Eval ≈ world-champion level

Digression: Exact values DO matter
[Figure: the same MAX/DICE/MIN tree evaluated twice, with chance probabilities 0.9/0.1. With MIN values 2, 3, 1, 4 the chance nodes are worth 2.1 and 1.3, so MAX prefers the left move; with the order-preserving relabelling 20, 30, 1, 400 they are worth 21 and 40.9, so MAX prefers the right move.]
Behaviour is preserved only by positive linear transformation of Eval
Hence Eval should be proportional to the expected payoff

Games of imperfect information
E.g., card games, where opponent's initial cards are unknown
Typically we can calculate a probability for each possible deal
Seems just like having one big dice roll at the beginning of the game
Idea: compute the minimax value of each action in each deal, then choose the action with highest expected value over all deals
Special case: if an action is optimal for all deals, it's optimal.
GIB, the current best bridge program, approximates this idea by
1) generating 100 deals consistent with bidding information
2) picking the action that wins most tricks on average
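
A sketch of that two-step GIB-style averaging; sample_deal, legal_actions, and minimax_after are assumed helpers (the deal sampler must be consistent with everything revealed so far, and minimax_after scores an action by full minimax in the resulting perfect-information game):

def monte_carlo_decision(info_state, game, num_deals=100):
    """Average each action's minimax value over sampled deals."""
    totals = {a: 0.0 for a in game.legal_actions(info_state)}
    for _ in range(num_deals):
        deal = game.sample_deal(info_state)           # hypothetical helper
        for a in totals:
            totals[a] += game.minimax_after(deal, a)  # hypothetical helper
    # Pick the action with the highest average value over the samples
    return max(totals, key=lambda a: totals[a] / num_deals)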

Example
Four-card bridge/whist/hearts hand, MAX to play first
[Figure: the same four-card ending analysed three times. In each of the two fully visible deals, minimax gives a definite value for MAX's lead; when the opponent's cards are unknown and the two possible deals are averaged, the candidate leads come out worth 0.5.]

Commonsense example
Road A leads to a small heap of gold pieces
Road B leads to a fork: take the left fork and you'll find a mound of jewels; take the right fork and you'll be run over by a bus.

Road A leads to a small heap of gold pieces
Road B leads to a fork: take the left fork and you'll be run over by a bus; take the right fork and you'll find a mound of jewels.

Road A leads to a small heap of gold pieces
Road B leads to a fork: guess correctly and you'll find a mound of jewels; guess incorrectly and you'll be run over by a bus.

Proper analysis
The intuition that the value of an action is the average of its values in all actual states is WRONG
With partial observability, the value of an action depends on the information state or belief state the agent is in
Can generate and search a tree of information states
Leads to rational behaviors such as
- Acting to obtain information
- Signalling to one's partner
- Acting randomly to minimize information disclosure

Summary
Games are fun to work on! (and dangerous)
They illustrate several important points about AI:
- perfection is unattainable ⇒ must approximate
- good idea to think about what to think about
- uncertainty constrains the assignment of values to states
- optimal decisions depend on information state, not real state
Games are to AI as grand prix racing is to automobile design