Determining the state of Texas hold 'em in almost real time

David Molin (dmolin@stanford.edu), Tomas Forsyth Rosin (tomfo@stanford.edu)

Abstract - This paper describes a method that finds and determines the suit and rank of playing cards in an image. The image processing pipeline is designed so that it should be possible to run it on a mobile device in close to real time. The resulting pipeline managed to find and correctly classify cards with an accuracy of 66%.

I. INTRODUCTION

The problem of automatically detecting the suit and rank of playing cards from a stream of images has several potential commercial and non-commercial applications. One such application is to automatically determine the cards on the table when broadcasting live poker tournaments.

Generally, when attempting to match objects against a known template in images, it is possible to use keypoint detectors such as SIFT [1] or ORB [2], match these keypoints, and perform a geometric consistency check using RANSAC. However, because the keypoints on poker cards correspond to the corners of the suit symbols, the ratio test described in [1] would reject most of the matches, as each suit symbol appears multiple times on a card.

Fig. 1. Outline of the process used for detecting cards in this paper.

II. PRIOR WORK

Some prior work exists in this field. For example, [3] used optical character recognition to determine the suit and rank of playing cards. In 2007, [7] used template matching of the symbols in the corners of the cards to determine their rank. In the broader image processing literature, several published methods are applicable to card recognition. In 1991, Kiryati et al. proposed a method to compute an approximate Hough transform by considering only a subset of the points in the image [9]. In 2000, Matas et al. improved the probabilistic Hough transform [8]. Keypoint and feature detectors such as SIFT [1] and ORB [2] have also been developed over the last decade, and these methods can be used to find instances of objects in an image. In 1979, Nobuyuki Otsu developed a method to find a threshold that separates the bright portions of an image from the dark portions [5].

III. METHOD

The process used in this paper for extracting the suit and rank of all cards in an image can be divided into two parts. The first part extracts the positions of the corners of all cards and, from these positions, extracts an image of each card in an upright orientation with a known size. The second part finds the suit and rank from such an upright card image.

A. Extracting the card corners

Fig. 2. Pipeline for identifying the cards and finding their corners.

First, the assumption is made that the cards are considerably brighter than the background. This is usually true, since poker cards are pieces of white paper and the table on which poker is played is usually covered with a green tablecloth. This motivates why it should be possible to separate the poker cards from the background using Otsu's method. Once Otsu's method has produced a mask of the cards, the contours of the cards can be extracted by applying the method described in [6]. A card will have a contour consisting of four lines.
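As a concrete illustration of this segmentation and contour-extraction step, the sketch below uses OpenCV from Python. It is not the authors' code: the function name, the BGR input assumption, and the minimum-area filter are illustrative choices.

```python
import cv2

def extract_card_contours(image_bgr, min_area=5000):
    """Segment bright card regions with Otsu's method and return their contours.

    Illustrative sketch; `min_area` is an assumed noise-rejection value,
    not a threshold taken from the paper.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Otsu's method picks a global threshold separating the bright cards
    # from the darker (typically green) table background.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Border following (Suzuki [6]) extracts the outer contours of the mask.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep only contours large enough to plausibly be a whole card.
    return [c for c in contours if cv2.contourArea(c) > min_area], mask
```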
Since playing cards are small and the distance to the cards is much larger than the size of a card, the contour of each card will contain pairs of almost parallel lines. This structure can be exploited by finding these parallel lines with a Hough transform. In practice, the standard Hough transform is too slow for near-real-time applications, so the progressive probabilistic Hough transform described in [8] is used instead. It gives results similar to those of the original Hough transform but requires less computation. From the Hough transform it is possible to find the endpoints of the long parallel lines. These endpoints usually correspond to the corners of the cards; some false positives may result, but these can be rejected at a later stage in the pipeline. Once the corners are found, it is straightforward to find and apply a projective transformation that maps each card to a known shape.
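For the line-detection step, OpenCV's cv2.HoughLinesP implements a progressive probabilistic Hough transform in the spirit of [8]. The sketch below is a hedged illustration; all parameter values are assumptions rather than values reported in the paper.

```python
import cv2
import numpy as np

def find_card_line_endpoints(mask):
    """Find endpoints of long line segments in a binary card mask using the
    progressive probabilistic Hough transform.

    Illustrative sketch; thresholds and segment lengths are assumed values.
    """
    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLinesP(edges,
                            rho=1,                # 1-pixel distance resolution
                            theta=np.pi / 180,    # 1-degree angular resolution
                            threshold=50,         # minimum votes per line
                            minLineLength=80,     # ignore short segments
                            maxLineGap=10)        # bridge small gaps
    # Each entry is (x1, y1, x2, y2): the two endpoints of a detected segment.
    return [] if lines is None else [tuple(l[0]) for l in lines]
```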

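The rectification mentioned above can be illustrated with a perspective warp. The corner ordering and the 200 x 300 output size below are assumptions made for this sketch, not figures from the paper.

```python
import cv2
import numpy as np

def rectify_card(image_bgr, corners, width=200, height=300):
    """Warp a card given by four corner points (top-left, top-right,
    bottom-right, bottom-left) to an upright image of a known size.

    Illustrative sketch; corner ordering and output size are assumptions.
    """
    src = np.array(corners, dtype=np.float32)
    dst = np.array([[0, 0], [width - 1, 0],
                    [width - 1, height - 1], [0, height - 1]], dtype=np.float32)
    # A projective (perspective) transform maps the four detected corners
    # onto the corners of the canonical card rectangle.
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image_bgr, H, (width, height))
```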
B. Finding rank and suit of a card

Fig. 3. Pipeline for determining the rank and suit of a card.

1) Determining suit: The color of a card can be extracted by comparing the red channel with another color channel. For red cards such as diamonds and hearts, the red channel will take large values at all points on the card; this is not the case for black cards such as spades or clubs. This fact can be used to determine whether the suit of a card is red or black. Since every card has its suit symbol in the upper left corner, the suit itself can be extracted by template matching in this region.

2) Determining whether a card is a face or non-face card: The appearance of face cards and non-face cards is quite different: face cards consist of a large illustration, which suggests that keypoint matching such as ORB or SIFT could be used to match these cards by comparing their extracted keypoints to the query card. It is therefore a good idea to compute the rank of face cards and non-face cards in different ways. To decide whether a card is a face card, the method uses the fact that face cards consist of one large connected blob, while non-face cards consist of several small blobs. Blobs are found using Otsu's method, and the size of the largest blob in the image is measured. If this size exceeds a predetermined threshold, the image is classified as a face card.

3) Classifying face cards: When doing keypoint matching for the face cards, an issue might arise in the ratio test of [1]: the bottom half of a face card is a mirrored copy of the top half, so each feature has a corresponding feature in the mirrored part of the card. This is not a serious problem even if ignored, and it can be addressed by using the third closest feature in the ratio test instead of the second closest.

4) Classifying non-face cards: For non-face cards, keypoint matching would be unsuitable, since keypoints would mainly be found on the suit symbols, which occur several times on each card, so the matches would not be distinctive. Instead, the number of blobs on each non-face card is counted; each blob corresponds to one suit symbol. To suppress noise, a blob must be larger than a given threshold, otherwise the small symbols in the corners of the card, as well as small blobs created by noise, would contribute to the rank of the card. The blobs are found by applying Otsu's method to the grayscale version of the card.
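The face/non-face decision and the blob-count rank estimate of subsections 2) and 4) could look roughly like the sketch below. The central crop and the area thresholds are assumed values chosen for illustration; the paper does not report its thresholds.

```python
import cv2
import numpy as np

def classify_rank_by_blobs(card_gray, min_blob_area=150, face_area=8000):
    """Decide face vs. non-face from the largest blob and, for non-face
    cards, estimate the rank by counting suit-symbol blobs.

    Illustrative sketch; `min_blob_area` and `face_area` are assumptions.
    """
    # Crop the central region so the small corner symbols are excluded.
    h, w = card_gray.shape
    center = card_gray[int(0.15 * h):int(0.85 * h), int(0.15 * w):int(0.85 * w)]
    # Otsu's method binarizes the card; symbols are darker than the paper,
    # so the inverted threshold marks them as foreground.
    _, binary = cv2.threshold(center, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    areas = stats[1:, cv2.CC_STAT_AREA]  # skip label 0 (background)
    if len(areas) and areas.max() > face_area:
        return "face"                    # one large blob: a face card
    # Each sufficiently large remaining blob corresponds to one suit symbol.
    return int(np.sum(areas > min_blob_area))
```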
IV. RESULTS

A. Test set

Fig. 4. Four of the 60 images used to test the detector.

The method described in the previous section for determining the suit and rank of cards is applied to a test set of 20 images, each containing 3 cards, taken in an indoor environment under conditions similar to those at a real poker table. In other words, clutter is present and the images are not close-ups of poker cards taken straight from above. Some photographs include motion blur, as this will generally occur when a mobile device is used to determine cards in real time.

TABLE I. Confusion matrix of the cards detected by the algorithm.

                        Real class
  Detected class     Card    Non-card
  Card                 54           0
  Non-card              6           -

The confusion matrix shows that the results for detecting the cards are good. Using the method described earlier on the test set, the accuracy of suit detection is 83% over all detected cards. The main problem in suit detection is that clubs are detected as spades; this happened in 7 out of the 9 cases where the suit was incorrectly detected. The correct rank is detected for 92.5% of the detected cards, and when the rank was wrong it was usually off by one. The accuracy of the whole pipeline is 66%.

B. Undetected cards

In the test set, 10% of the cards were not detected at all. One example is shown in figure 5, where the 4 of Clubs is not detected. If the card is not outlined properly, the window used for counting the number of blobs in the center of the card might be too small to detect large enough blobs.

On the 4 of Clubs, all of the blobs are spread out, which can lead to no blobs being counted at all, resulting in a non-card output.

Fig. 5. An example of an undetected card due to the narrow search window.

Another example is shown in figure 6. To avoid false positives from, for example, the back side of a card, there is a threshold on the minimum number of matching keypoints required for a card to be detected as a face card at all. In this case, the Queen of Diamonds reached too few matching keypoints and is instead recognized as a non-card.

Fig. 6. An example of an undetected card due to the keypoint threshold.

C. Wrong suit

The similarity between the shapes of Clubs and Spades is a major cause of incorrect suit classifications. In figure 7, the 7 of Clubs is detected as the 7 of Spades. It might seem that this is caused by the blur making classification harder, but that is not always the case: in figure 8, the card is captured sharply and is still detected incorrectly.

Fig. 7. Blurry Clubs card, detected as a Spade.

Fig. 8. Sharp Clubs card, detected as a Spade.

Another incorrect suit classification is shown in figure 9. The 5 of Hearts is detected as the 5 of Diamonds. In this case, the blur has warped the corner suit symbols, which leads to the error. However, the two other cards in that image are even more blurred and are still correctly classified.

D. Wrong rank

Incorrectly classified ranks are significantly rarer than incorrectly classified suits. The most common cause is that too much of the card's surroundings is included in the detected area, which leads either to a corner being pushed into the search window or to a part of the background being detected as a blob. An example is shown in figure 10, where the 3 of Hearts is detected as a 4. There was one incorrect classification of a face card in the test image set: figure 11 shows a King detected as a 10.

This is because the size of the largest blob in the binarized version of the card was too small to be a face card. If the number of blobs exceeds 10, the card is classified as a 10. Another card in the image is tilted 90 degrees, but is still correctly classified.

Fig. 9. Blurry cards, both correctly and incorrectly detected.

Fig. 10. Poor card detection precision, leading to a miscount of blobs.

Fig. 11. A face card, classified as a non-face card.

V. DISCUSSION

As can be seen in the results section, this approach gives fairly good results. They are slightly worse than the results in [3], which reported an accuracy of 93%. However, the results are hard to compare directly, since this paper did not use the same test set as [3]. To be usable in practice, a card detector should have an accuracy above 98%; at lower accuracies, errors are so common that a human is needed to correct the output of the card analyzer. Although the error of this approach is too high for practical applications, it might be possible to improve the results to an acceptable rate by increasing the accuracy of the corner detection step and the template matching step.

A. Future work

Future work could include finding a way to make this method work with occluded cards and with cards placed on top of each other. It should also be possible to improve the results with a better corner detector, since the corners are not perfectly detected by the current pipeline, which means the recognition step receives a slightly warped card as input. One of the major reasons why suit detection performed poorly was that clubs were often identified as spades; with a better method for discriminating between spades and clubs, the results would improve significantly. Since the cards used in the test set were reflective, some of them had specular highlights. This problem is similar to occlusion, since it was not possible to determine the actual colors of the cards at these points.

VI. CONCLUSION

The method described in this paper managed to classify playing cards while remaining resistant to clutter as well as perspective transformations. The accuracy of the method is currently quite low, but by improving some minor steps in the algorithm, it might be possible to obtain significantly better results.

REFERENCES

[1] Lowe, David G. "Distinctive image features from scale-invariant keypoints." International Journal of Computer Vision 60.2 (2004): 91-110.
[2] Rublee, Ethan, et al. "ORB: an efficient alternative to SIFT or SURF." Computer Vision (ICCV), 2011 IEEE International Conference on. IEEE, 2011.
[3] Martins, Paulo, Luís Paulo Reis, and Luís Teófilo. "Poker Vision: playing cards and chips identification based on image processing." Pattern Recognition and Image Analysis. Springer Berlin Heidelberg, 2011. 436-443.
[4] Matas, Jiri, Charles Galambos, and Josef Kittler. "Robust detection of lines using the progressive probabilistic Hough transform." Computer Vision and Image Understanding 78.1 (2000): 119-137.

[5] Otsu, Nobuyuki. "A threshold selection method from gray-level histograms." Automatica 11.285-296 (1975): 23-27.
[6] Suzuki, Satoshi. "Topological structural analysis of digitized binary images by border following." Computer Vision, Graphics, and Image Processing 30.1 (1985): 32-46.
[7] Zheng, Chunhui, and Richard Green. "Playing card recognition using rotational invariant template matching." Proc. of Image and Vision Computing New Zealand. 2007.
[8] Matas, Jiri, Charles Galambos, and Josef Kittler. "Robust detection of lines using the progressive probabilistic Hough transform." Computer Vision and Image Understanding 78.1 (2000): 119-137.
[9] Kiryati, Nahum, Yuval Eldar, and Alfred M. Bruckstein. "A probabilistic Hough transform." Pattern Recognition 24.4 (1991): 303-316.