10. Machine Learning in Games




Machine Learning and Data Mining, 10: Machine Learning in Games. Luc De Raedt. Thanks to Johannes Fuernkranz for his slides.

Contents: Game playing: what can machine learning do? What is (still) hard? Various types of games: board games, card games, real-time games. Some historical developments.

Why games? Games are an ideal environment for testing AI / ML systems: progress and performance can easily be measured, and the environment can easily be controlled.

Machine learning for game playing has a long history, almost as old as AI itself. Arthur Samuel's checkers (draughts) program (late 1950s, early 1960s) introduced several interesting ideas and techniques. Today, Chinook (which does not learn) is the world champion at checkers.

State of the art: Solved: Tic-tac-toe, Connect Four, Gomoku; endgames in chess (5 pieces) and checkers (8 pieces). World-champion level: chess, checkers, backgammon, Scrabble, Othello. Humans still much better: Go, Shogi, Bridge, Poker.

ML in games: (1) Learning the evaluation function, e.g. for minimax search; essentially reinforcement learning. (2) Discovering patterns: from databases, discover characteristic / winning patterns. (3) Modelling the opponent: even given an optimal strategy, find a strategy that better exploits that particular opponent.
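As a minimal illustration of the first idea, here is a sketch (not the lecturer's code) of a learnable linear evaluation function plugged into a depth-limited minimax search; tic-tac-toe is used only because it is small, and the feature set and weights are hypothetical.

```python
# Minimal sketch: a linear evaluation function with learnable weights, used at
# the leaves of a depth-limited minimax search. The board is a tuple of 9 cells
# containing 'X', 'O' or ' '.

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def features(board):
    """Hand-crafted features: open two-in-a-rows for X and O, X in the centre."""
    def two_open(p):
        return sum(1 for line in LINES
                   if [board[i] for i in line].count(p) == 2
                   and [board[i] for i in line].count(' ') == 1)
    return [two_open('X'), two_open('O'), 1.0 if board[4] == 'X' else 0.0]

def evaluate(board, weights):
    """The learnable part: a weighted sum of board features."""
    return sum(w * f for w, f in zip(weights, features(board)))

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, depth, player, weights):
    """X maximises, O minimises; the learned evaluation scores non-terminal leaves."""
    w = winner(board)
    if w == 'X': return 1000
    if w == 'O': return -1000
    if ' ' not in board: return 0
    if depth == 0: return evaluate(board, weights)
    nxt = 'O' if player == 'X' else 'X'
    scores = [minimax(board[:i] + (player,) + board[i+1:], depth - 1, nxt, weights)
              for i, cell in enumerate(board) if cell == ' ']
    return max(scores) if player == 'X' else min(scores)

# Reinforcement learning would tune these weights; here they are hand-set.
print(minimax((' ',) * 9, 3, 'X', [1.0, -1.0, 0.5]))
```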

MENACE (Michie, 1963)

MENACE (Michie, 1963) learns Tic-Tac-Toe. 287 boxes (one for every board position), 9 colours of pearls (one for every square). Playing: choose the box corresponding to the current position, draw a pearl from it, and make the corresponding move. Learning: after a lost game, the drawn pearls are not returned (negative reinforcement); after a won game, each drawn pearl is returned together with an extra pearl to the box it was taken from (positive reinforcement).

Example: X to move in a given position; choose the matching box, select a pearl, and take the corresponding move.
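The playing and learning loop described above can be written down compactly. A minimal sketch, assuming a board encoded as a 9-character string and dictionaries standing in for the matchboxes; it follows the slide, not Michie's physical setup.

```python
import random
from collections import defaultdict

boxes = defaultdict(dict)   # board state -> {move index: number of pearls}

def choose_move(board):
    """Pick the box for this position and draw a pearl, i.e. sample a move
    with probability proportional to its pearl count."""
    box = boxes[board]
    if not box:                                   # first visit: one pearl per legal move
        box.update({i: 1 for i, c in enumerate(board) if c == ' '})
    moves, counts = zip(*box.items())
    if sum(counts) == 0:                          # box has emptied (MENACE would resign)
        counts = [1] * len(moves)
    return random.choices(moves, weights=counts)[0]

def reinforce(history, won):
    """history = [(board, move), ...] recording every pearl drawn in one game."""
    for board, move in history:
        if won:
            boxes[board][move] += 1               # return the pearl plus an extra one
        else:
            boxes[board][move] = max(0, boxes[board][move] - 1)   # the pearl is kept out
```

Over many games, moves that lead to wins accumulate pearls and are therefore drawn more often.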

Arthur Samuel's checkers player: rote learning (learning by heart, i.e. memorizing), minimax search with alpha-beta pruning.

Minimax Search / KnightCap

Temporal difference learning

Backgammon has elements of chance. TD-Gammon (Tesauro) plays at a very high level and has even changed the strategies of human players. Why does it work? Deep search does not seem very useful (because of the random dice rolls), and positions can be represented compactly using a neural network and a reasonable set of features.
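The common core of these programs is the temporal-difference update. Below is a minimal sketch of a TD(0) step with a linear value function (illustrative only: TD-Gammon uses TD(lambda) with a neural network, and the learning rate and features here are made up).

```python
def td0_update(weights, features_t, features_t1, reward, alpha=0.1, gamma=1.0):
    """Move the value estimate of the current position toward
    reward + gamma * V(next position), where V(s) = sum_i weights[i] * phi_i(s)."""
    v_t  = sum(w * f for w, f in zip(weights, features_t))
    v_t1 = sum(w * f for w, f in zip(weights, features_t1))
    delta = reward + gamma * v_t1 - v_t                 # the TD error
    return [w + alpha * delta * f for w, f in zip(weights, features_t)]

# Example: a two-feature position whose estimated value was too low gets pulled up.
w = td0_update([0.0, 0.0], features_t=[1.0, 0.5], features_t1=[0.0, 1.0], reward=1.0)
print(w)
```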

KnightCap (Baxter et al., 2000) learns chess: it improved from 1650 Elo (beginner) to 2150 Elo (master level) in about 300 internet games. Improvements over TD-Gammon: integration of TD learning with search, and training against real opponents instead of against itself.

Discovering patterns: endgame databases. Enormous endgame databases exist for certain combinations of pieces: the optimal moves are known (computed by brute force), and for every position it is known whether it is won, lost or drawn, and in how many moves. Questions: Can the databases be compressed, i.e. are rules plus exceptions more compact than the raw database? Can they be turned into simple rules, i.e. can complex optimal strategies be turned into simple but effective ones? Which properties of the board should be taken into account (relational representations / feature engineering)? E.g., Quinlan, Alan Shapiro, Fuernkranz.

KRK (king and rook vs. king), the simplest endgame: 25620 positions, won in 0 to 16 moves; 2796 different positions, 18 classes. Learning classification rules with background knowledge and relations yields 1457 rules and 1003 exceptions: not much is gained.

Relational / logical representations: krk(-1,d,4,h,5,g,5). Use background information such as samediagonal, samerow, samecolumn, attacks( ), etc.
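As an illustration of what such background predicates could look like, here is a small sketch over a hypothetical (file, rank) square encoding such as ('d', 4); the slides use Prolog-style facts like krk(-1,d,4,h,5,g,5), so the Python below is only a stand-in.

```python
# Hypothetical relational background predicates for KRK; a square is a
# (file, rank) pair, e.g. ('d', 4).

def samerow(sq1, sq2):
    return sq1[1] == sq2[1]

def samecolumn(sq1, sq2):
    return sq1[0] == sq2[0]

def samediagonal(sq1, sq2):
    return abs(ord(sq1[0]) - ord(sq2[0])) == abs(sq1[1] - sq2[1])

def attacks(rook, target):
    """Simplified rook attack: same row or column, ignoring blocking pieces."""
    return rook != target and (samerow(rook, target) or samecolumn(rook, target))

print(samediagonal(('d', 4), ('g', 7)), attacks(('h', 5), ('h', 1)))
```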

Discovering strategies: endgames are solved but hard to understand, even for grandmasters (e.g. KQKR); many books have been written on endgames. Goal: find easy-to-understand strategies that are perhaps not optimal, but easy to recall and follow.

Difficult games for computers: Go? Too many possible moves; too deep a search would be necessary, which is intractable (and there is a big award to be won). What about endgames? Simplified Go endgames have been considered (e.g. by Jan Ramon).

Modelling the opponent is a key problem in games such as poker and bridge. For simple games the optimal strategy is known (Nash equilibrium); in rock-paper-scissors, for instance, the optimal strategy is to play at random, but that is not optimal against a player who always plays rock. Opponent modelling tries to predict the opponent's next move, or which move the opponent expects you to play. It is key to success in some games, cf. poker (Jonathan Schaeffer).
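A toy illustration of this point (my own sketch, not from the slides), using rock-paper-scissors: the equilibrium strategy plays uniformly at random, whereas an opponent model counts the opponent's past moves and plays the best response to the most frequent one.

```python
import random
from collections import Counter

BEATS = {'rock': 'paper', 'paper': 'scissors', 'scissors': 'rock'}   # value beats key

def equilibrium_move():
    """Nash equilibrium: uniformly random, unexploitable but also never exploiting."""
    return random.choice(list(BEATS))

def modelled_move(opponent_history):
    """Best response to the opponent's empirically most frequent move."""
    if not opponent_history:
        return equilibrium_move()
    predicted = Counter(opponent_history).most_common(1)[0][0]
    return BEATS[predicted]

# Against a player who always plays rock, the model locks onto 'paper' and wins
# every round, while the random strategy only wins about a third of the time.
print(modelled_move(['rock', 'rock', 'rock']))    # -> 'paper'
```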

Other types of games: adventure games, interactive games, current computer games. Let's look at some examples.

Digger (learning to survive). A key problem: representing the states; the use of relations is necessary.

Real-time games: RoboCup. Components can be learned, e.g. the goalie using RL. How to tackle these? Problems: many degrees of freedom, a varying number of objects, continuous positions.

Learning to fly (work by Claude Sammut et al.): behavioural cloning (trying to imitate the player), reinforcement learning, layered learning / bootstrapping.

Financial games: predicting exchange rates (Daimler-Chrysler), predicting the stock market. Many models; time series!

Games and ML: a natural and challenging environment. Several successes, and a lot still to do. An ideal topic for a thesis or student project (Studienarbeit). Merry Christmas and Happy New Year!!!