Determining optimal window size for texture feature extraction methods




IX Spanish Symposium on Pattern Recognition and Image Analysis, Castellon, Spain, May 2001, vol. 2, 237-242, ISBN: 84-8021-351-5.

Determining optimal window size for texture feature extraction methods

Domènec Puig, Miguel Angel García
Intelligent Robotics and Computer Vision Group
Department of Computer Science and Mathematics
Rovira i Virgili University
Ctra. Salou s/n, 43006 Tarragona, Spain
{dpuig, magarcia}@etse.urv.es

Abstract

This paper presents a technique to determine optimal window sizes for the computational methods utilized to extract features from textured images. Those features are useful for both texture classification and segmentation. Window sizes are determined so as to best serve two complementary tasks, namely texture feature evaluation and texture segmentation, which have opposite requirements: texture feature evaluation requires rather large windows in order to obtain meaningful descriptions of their content, while texture segmentation requires small windows in order to accurately locate the boundaries between different textured regions. The proposed technique has been applied to several well-known computational methods that are frequently utilized for feature extraction from textured images.

Keywords: texture feature extraction, texture segmentation, computational methods, optimal window size.

1 Introduction

Texture is an important cue for many computer vision tasks, such as image classification and segmentation. It is therefore not surprising that a considerable amount of work has already been devoted to obtaining good computational measures that allow the proper description of texture features (e.g., [2], [5], [8], [11]). Texture segmentation consists of splitting an image into regions of uniform texture. This task is usually performed in two stages. The first stage (texture feature extraction) extracts features that characterize each texture in some way.
The second stage (texture segmentation) utilizes the previous features to determine uniform regions that allow the segmentation of the image. However, the quality of the final segmentation greatly depends on the size of the regions (windows) that are analyzed by both stages. The majority of texture segmentation algorithms proposed in the literature define those sizes experimentally and do not provide any clear insight on how the chosen size may affect the final result. Hence, determining a suitable size for those windows is an open problem, since both stages have contradictory requirements, as shown below.

The main purpose of texture feature extraction is to obtain relationships among the pixels that belong to a similar texture, such as spatial gray level dependences. These relationships make it possible to distinguish every distinctive texture from the others [10]. Usually, texture feature extraction methods are locally applied to every pixel of the input image by evaluating some type of difference among neighboring pixels through small square windows that overlap over the entire image. The result obtained for each window is assigned as a feature value to the center pixel of that window. In order to obtain a good texture characterization, it is desirable to work with large windows, since they obviously contain more information than small ones. On the other hand, texture segmentation aims at separating the different uniform regions that constitute an input image by taking texture similarity into account. Finding precise localizations of boundary edges between adjacent regions is a fundamental goal for the segmentation task, and can only be ensured with relatively small windows. Therefore, good texture feature extraction requires large windows, while precise boundary localization demands small ones. Since both tasks must be applied in order to segment textured images, a certain trade-off regarding window size must be made. Many studies have been done about the performance of various families of computational methods for texture feature extraction [9], [10], but few studies exist about the optimal sizes of the windows utilized by those methods: [1], [3] and [7]. A recent work [4] analyzes the role played by both the shape and size of those windows, showing that texture characterization is much more influenced by the window size than by its shape, although no hints on optimal sizes are provided.
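The sliding-window scheme described above can be sketched as follows. This is an illustrative reimplementation, not the authors' code; the function name and the choice of the standard deviation as the default feature are assumptions made here (any of the statistics discussed later could be plugged in):

```python
import numpy as np

def sliding_window_feature(image, size, feature=np.std):
    """Apply a texture feature method to overlapping size x size windows.

    The value computed for each window is assigned to the window's center
    pixel, as described in the text; border pixels whose window would fall
    outside the image are left at zero here for simplicity.
    """
    h, w = image.shape
    out = np.zeros((h, w), dtype=float)
    half = size // 2
    # Displace the window one pixel at a time, left to right, top to bottom.
    for r in range(h - size + 1):
        for c in range(w - size + 1):
            window = image[r:r + size, c:c + size]
            out[r + half, c + half] = feature(window)
    return out
```

Applied with a large `size`, each output value summarizes a broad neighborhood (good characterization); with a small `size`, it tracks local changes (good boundary localization), which is exactly the trade-off at issue.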
This paper presents a technique for determining the optimal size of the windows utilized by texture feature extraction methods in order to maximize both their discrimination and segmentation capabilities. This technique is described in Section 2. Section 3 presents the evaluation of that technique upon various texture feature extraction methods that are widely used for texture analysis and segmentation. Conclusions and future research lines are given in Section 4.

2 Optimal window size for a texture feature extraction method

A texture feature extraction method is a computational method that analyzes the pixels contained in a certain region (window) of an input image and generates a single value that represents the contents of that window. An ideal texture feature extraction method should be capable of generating different values for different textures. Unfortunately, there are no computational methods that behave in that way. Thus, the discrimination capabilities of a texture feature extraction method depend on the principles of the method and, to a large extent, on the window size and the different textures being analyzed. Therefore, in order to determine the optimal window size for a computational method, it is necessary to define a specific set of both textures and window sizes. This section describes an algorithm for determining the optimal window size for a texture feature extraction method in order to maximize both its discrimination and segmentation capabilities with respect to T different classes of texture, considering N different window sizes: S1×S1, ..., SN×SN. Every texture class Ti is supposed to be represented by a gray level image Ii that does not contain any other textures. The proposed algorithm determines the smallest window size that maximizes the discrimination capabilities of the given computational method. In this way, that method will be applicable to both the texture feature extraction and segmentation stages, which have contradictory size requirements as discussed above.

The proposed algorithm proceeds as follows. The first window size, S1×S1, is chosen. Every image Ii is then convolved by using windows of that size. That convolution consists of applying the computational method to the pixels contained in the window, which is displaced from left to right and top to bottom in one-pixel steps until the whole image has been scanned. The results after applying the computational method to the different window positions are recorded. After processing the T images that represent the textures of interest, both the minimum, MIN, and maximum, MAX, values returned by the computational method are determined. All the computed values are then linearly transformed by mapping the interval [MIN, MAX] to [0, 255]. Afterwards, a normalized histogram Hi of the transformed values associated with every image Ii is obtained. Each histogram models a discrete probability density function (PDF) that represents the behavior of the computational method for the corresponding texture. Ideally, the PDFs corresponding to two different textures should not overlap, indicating that the computational method is capable of distinguishing both textures. Thus, the amount of overlap between two PDFs can be used to represent the discrimination power of the method for the two given textures and the current window size. If two PDFs that do not overlap are subtracted, the area of the result will be equal to 2. Alternatively, the area of the subtraction of two fully overlapped PDFs is 0.
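The histogram construction and overlap measure just described can be sketched as follows (an illustrative reimplementation; the function names and the 256-bin layout, which follows the [0, 255] mapping in the text, are choices made here):

```python
import numpy as np

def normalized_histogram(values, vmin, vmax, bins=256):
    """Linearly map feature values from [vmin, vmax] to bins 0..bins-1 and
    return a normalized histogram, i.e. a discrete PDF that sums to 1."""
    scaled = (np.asarray(values, dtype=float) - vmin) / (vmax - vmin)
    idx = np.clip(np.round(scaled * (bins - 1)).astype(int), 0, bins - 1)
    hist = np.bincount(idx, minlength=bins).astype(float)
    return hist / hist.sum()

def discrimination_percentage(h1, h2):
    """Area of the subtraction of two discrete PDFs, mapped to a percentage.

    Two non-overlapping PDFs give an area of 2 (100% discrimination);
    two identical PDFs give an area of 0 (0% discrimination)."""
    return np.abs(h1 - h2).sum() / 2.0 * 100.0
```

Note that since each PDF sums to 1, the area of |H1 − H2| always lies in [0, 2], which is why halving it yields a percentage once scaled by 100.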
Therefore, an area equal to 2 represents a discrimination percentage of 100%, while an area equal to 0 represents a discrimination percentage of 0%. A discrimination percentage DPj, 1 ≤ j ≤ K, is obtained in this way for every pair of images, with K being the number of combinations without repetition of order 2 of T elements: K = T!/(2!(T−2)!). The average of these K percentages, DP, is also computed. The process described so far is applied to the N window sizes that are being analyzed. The goal of the proposed algorithm consists of finding the minimum window size that leads to maximum discrimination among the different textures. Since a discrimination of 100% is unrealistic given the nature of the texture feature extraction methods that have been proposed in the literature, it is necessary to define a threshold θ that determines when two textures can be considered to be well discriminated (e.g., θ = 95%). At this point, the algorithm has already computed, for every pair of textures, the discrimination percentage of the computational method for each window size. Since the final goal is to choose the minimum window size that maximizes the discrimination capabilities of the method, the algorithm proceeds by finding out, for every pair of images, the smallest window size whose discrimination percentage is above the threshold θ. Every time such a window size is found, a counter associated with it is increased. After the K image pairs have been considered, every window size Si will be associated with a number of votes η that represents how many times that size was the smallest one to produce good texture discrimination (above θ). Since K different image pairs intervene in the voting process and every pair may contribute with a single vote at most (if the discrimination percentages associated with an image pair are all below the given threshold, that image pair does not contribute with any votes), the sum A of all the votes will be less than or equal to K. Finally, those votes are normalized by linearly mapping them to the interval [0, 100] (the minimum number of votes is mapped to 0 and the maximum to 100) and then multiplying them by the correction coefficient A/K. The latter penalizes the results of the voting process when there are image pairs that have not contributed to it. In the end, the algorithm generates N normalized votes, {η_S1, ..., η_SN}, and N average discrimination percentages, {DP_S1, ..., DP_SN}, one for each window size. In order to choose the best window size, a classical multicriterion decision technique is applied: the different window sizes correspond to the available alternatives, and both the normalized votes and discrimination percentages are the criteria that serve to evaluate the utility of those alternatives. The expected utility corresponding to each alternative (size) Si is obtained by averaging its two associated criteria, DP_Si and η_Si. The window size with the largest expected utility is finally selected as the optimal one for the given computational method, as it is the smallest size that produces the largest discrimination.

Figure 1. Sample images from the Brodatz album (d3, d15, d32, d37, d41, d5, d91, d94), 256x256 pixels each.

3 Experimental results

The proposed technique has been evaluated upon three families of texture feature extraction methods that are widely used for texture analysis and segmentation: first and second order statistical methods and operator-based methods [10]. This paper evaluates the proposed technique upon four of those methods: standard deviation and skewness [5] (first order statistics), contrast [5] (second order statistic) and the R5R5 Laws operator [6].
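The voting and multicriterion selection steps described in Section 2 can be put together in a minimal sketch like the following (an illustrative reimplementation; the function name and the layout of the `dp` matrix are assumptions made here):

```python
import numpy as np

def select_optimal_size(dp, sizes, theta=95.0):
    """Select the optimal window size from per-pair discrimination percentages.

    dp[j][i] is the discrimination percentage of texture pair j (of K pairs)
    for window size sizes[i].  Each pair votes for the smallest size whose
    percentage exceeds theta; votes are normalized to [0, 100], scaled by the
    correction coefficient A/K, and averaged with the mean discrimination
    percentage of each size to give its expected utility.
    """
    dp = np.asarray(dp, dtype=float)
    K, N = dp.shape
    votes = np.zeros(N)
    contributed = 0  # A: number of pairs that actually cast a vote
    for j in range(K):
        above = np.nonzero(dp[j] > theta)[0]
        if above.size:              # smallest size above the threshold
            votes[above[0]] += 1
            contributed += 1
    # Map votes to [0, 100] (min -> 0, max -> 100), then penalize by A/K.
    span = votes.max() - votes.min()
    norm = (votes - votes.min()) / span * 100.0 if span > 0 else np.zeros(N)
    norm = norm * (contributed / K)
    mean_dp = dp.mean(axis=0)          # average discrimination per size
    utility = (norm + mean_dp) / 2.0   # expected utility of each alternative
    return sizes[int(np.argmax(utility))], utility
```

For example, with three sizes [2, 4, 8] and three texture pairs whose percentages first exceed θ at size 4, 4 and 8 respectively, the function returns 4, since that size accumulates the most "smallest good size" votes while keeping a high average discrimination.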
Seven different window sizes have been tested: 2x2, 4x4, 8x8, 16x16, 32x32, 64x64 and 128x128. Those sizes have been chosen to be powers of two since some computational methods from the previous families only allow windows of that size. Eight different texture images obtained from the Brodatz album (Fig. 1) have been utilized. Table 1 shows the results after applying the proposed technique to the four computational methods evaluated in this paper. For every method, the table shows the normalized number of votes, η, the average discrimination percentage, DP, and the expected utility, considering the tested window sizes and a discrimination threshold θ of 95%. The optimal window size (the one with the largest expected utility) is highlighted. In order to validate the optimal window sizes determined by the proposed algorithm,

            S1=2  S2=4  S3=8  S4=16  S5=32  S6=64  S7=128
Laws R5R5
   DP       38.9  50.5  67.9  82.7   87.7   92.5   96.1
   η         0    23.2  92.9  81.3   23.2   46.4   34.8
   EU       19.5  36.9  80.4 *82.0   55.5   69.5   65.5
Deviation
   DP       51.5  61.1  65.7  70.7   78.2   86.3   95.4
   η         0    11.2  22.4  33.7   44.9   78.6   56.1
   EU       25.8  36.2  44.1  52.2   61.5  *82.4   75.8
Skewness
   DP       43.4  41.6  56.2  68.9   79.0   86.8   94.7
   η         0     0    29.1  19.4   67.9   29.1   38.8
   EU       21.7  20.8  42.6  44.1  *73.4   57.9   66.7
Contrast
   DP       45.5  55.0  56.6  56.2   51.7   55.9   55.2
   η         0     0     0     0     10.7   10.7   10.7
   EU       22.7  27.5  28.3  28.1   31.2  *33.3   32.9

Table 1: Average discrimination percentages (DP), normalized votes (η) and expected utilities (EU) for each computational method and different window sizes (* marks the optimal size, the one with the largest expected utility).

            S1=2  S2=4  S3=8  S4=16  S5=32  S6=64  S7=128
Laws R5R5   39.0  50.4  65.7 *97.7   96.8   90.6   63.2
Deviation   64.7  56.7  71.7  72.2   79.5  *83.8   52.6
Skewness    43.8  50.1  69.6  88.7  *96.5   78.9   65.3
Contrast    41.5  45.3  54.1  48.5   57.3  *67.5   59.1

Table 2: Segmentation tests: percentage of correctly classified pixels for each method and window size (* marks the maximum of each row).

each pair of texture images was merged into a single test image. The pixels of each test image were then classified into one of the two component textures by applying a classical statistical decision theory method that takes into account the outcome of each computational method for each window size. In particular, given a specific computational method and a window size, that method is applied to every pixel of the test image. The outcome of the method is then normalized and mapped to one of the entries of the histograms associated with the component textures (these histograms are described in Section 2). The value associated with the chosen entry at histogram Hi represents the probability that the analyzed pixel belongs to texture Ti. That pixel is finally classified as belonging to the texture whose probability is the largest. Table 2 shows the percentage of pixels that were correctly classified by applying the aforementioned procedure. The maximum percentages correspond to the window sizes that were chosen by the proposed technique.
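The per-pixel classification rule used in the validation step can be sketched as follows. This is a hypothetical helper written for illustration, assuming the normalized histograms of the component textures (as built in Section 2) and the global [MIN, MAX] range are available:

```python
import numpy as np

def classify_pixel(feature_value, vmin, vmax, histograms, bins=256):
    """Classify a pixel into one of T textures.

    The feature value computed at the pixel is mapped to a histogram bin;
    the value stored at that bin of each texture's normalized histogram acts
    as the probability that the pixel belongs to that texture, and the pixel
    is assigned to the texture with the largest probability.
    """
    scaled = (feature_value - vmin) / (vmax - vmin)
    bin_idx = int(np.clip(round(scaled * (bins - 1)), 0, bins - 1))
    probs = [h[bin_idx] for h in histograms]
    return int(np.argmax(probs))  # index of the most probable texture
```

Running this rule over every pixel of a two-texture test image and counting the correctly labeled pixels yields the percentages reported in Table 2.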

4 Conclusions

A new technique has been proposed for determining the minimum window size of a texture feature extraction method in order to maximize its discrimination and segmentation capabilities given a particular set of texture models. The algorithm applies a multicriterion decision technique in order to select the best window size based on two criteria, the normalized number of votes and the average discrimination percentage, which are determined by subtracting the discrete probability density functions obtained after applying the given computational method to all the pixels of each texture model. The results have been validated by applying a classical statistical decision method in order to segment test images that contain pairs of texture models. The window sizes obtained with the proposed technique are the ones that produce the largest percentage of correctly classified pixels. Further work will consist of evaluating the behavior of the proposed technique upon more texture feature extraction methods, such as Gabor filters [8]. A criterion for determining the best possible discrimination percentage threshold θ will also be devised.

References

[1] D. Blostein and N. Ahuja. Shape from Texture: Integrating Texture-Element Extraction and Surface Estimation. IEEE Trans. PAMI, 11(12): 1233-1251, 1989.
[2] K.I. Chang, K.W. Bowyer and M. Sivagurunath. Evaluation of Texture Segmentation Algorithms. Proc. IEEE CVPR, Fort Collins (USA), 1999.
[3] H.A. Cohen and J. You. The segmentation of images of unknown scale using multiscale texture tuned masks. Int. Conf. on Image Processing, 726-729, Singapore, 1992.
[4] P. García-Sevilla and M. Petrou. Analysis of Irregularly Shaped Texture Regions: A Comparative Study. 15th IAPR Int. Conf. on Pattern Recognition, 1080-1083, Barcelona, 2000.
[5] R.M. Haralick, K. Shanmugam and I. Dinstein. Textural Features for Image Classification. IEEE Trans. SMC, 3(6): 610-621, 1973.
[6] K.I. Laws. Textured Image Segmentation. Technical Report USCIPI-940, University of Southern California, Los Angeles, 1980.
[7] S. Novianto et al. Multiwindowed Approach to the Optimum Estimation of the Local Fractal Dimension for Natural Image Segmentation. Int. Conf. on Image Processing, Japan, 1999.
[8] G. Pok and J.C. Liu. Unsupervised Texture Segmentation Based on Histogram of Encoded Gabor Features and MRF Model. Int. Conf. on Image Processing, Japan, 1999.
[9] T. Randen and J.H. Husøy. Filtering for Texture Classification: A Comparative Study. IEEE Trans. PAMI, 21(4): 291-310, 1999.
[10] T.R. Reed and J.M. Hans du Buf. A Review of Recent Texture Segmentation and Feature Extraction Techniques. CVGIP: Image Understanding, 57(3): 359-372, 1993.
[11] J.C. Weszka, C.R. Dyer and A. Rosenfeld. A Comparative Study of Texture Measures for Terrain Classification. IEEE Trans. SMC, 6: 269-285, 1976.