Multiscale Object-Based Classification of Satellite Images Merging Multispectral Information with Panchromatic Textural Features



Remote Sensing and Geoinformation not only for Scientific Cooperation, Lena Halounová (Ed.), EARSeL, 2011

Consuelo Gonzalo-Martín 1 and Mario Lillo-Saavedra 2
1 Universidad Politécnica de Madrid, Departamento de Arquitectura y Tecnología de Sistemas Informáticos, Boadilla del Monte, Spain; consuelo.gonzalo@upm.es
2 Universidad de Concepción, Departamento de Mecanización y Energía, Chillán, Chile; malillo@udec.cl

Abstract. Once the advantages of object-based classification over pixel-based classification are accepted, the need arises for simple and affordable methods to define and characterize the objects to be classified. This paper presents a new methodology for identifying and characterizing objects at different scales by integrating the spectral information provided by a multispectral image with the textural information extracted from the corresponding panchromatic image. In this way, a set of objects is defined that yields a simplified representation of the information contained in the two source images. These objects can be characterized by different attributes that allow discrimination between different spectral and textural patterns. The methodology simplifies the processing of the information, both conceptually and computationally: the resulting attribute vectors can be used directly as training patterns for certain classifiers, for example artificial neural networks. Growing Cell Structures have been used here to classify the merged information.

Keywords. Object attributes, Object-based classification, Wavelet coefficients, Segmentation, Lacunarity, Growing Cell Structures.

1. Introduction

Classification algorithms for multispectral remote sensing images can be categorized according to different criteria; one of them is whether the algorithm is pixel-based or object-based. When pixel-based algorithms are applied to high-resolution remote sensing imagery, two main problems emerge. Very often a salt-and-pepper effect appears in the classified image, owing to the high variability of each spectral class in the original image. Furthermore, the limited number of spectral bands of such images results in low separability between land covers [1], [2]: the thematic classes cannot be discriminated from the multispectral information alone. On the other hand, it is well known that the human visual system does not operate on individual elements (pixels) but on the perception of objects characterized by different attributes (size, shape, texture, colour, ...). This is one reason why it is widely accepted that object-based classification strategies can reduce the disadvantages of pixel-by-pixel classification of high-resolution imagery. In this type of processing, a critical step is the definition and characterization of the objects: clustering neighbouring pixels with similar spectral and textural characteristics into areas that are homogeneous and meaningful from the standpoint of the end user. This clustering process, known as segmentation, must be adapted to the resolution and to the object scale of each particular problem. This paper therefore proposes a new methodology for defining multiscale objects: homogeneous spectral objects are derived from the multispectral image, and homogeneous textural objects from the wavelet coefficients of the corresponding panchromatic image.
This way of defining the homogeneous textural objects provides an object characterization at different scales, one for each level of the wavelet transform, which is very useful for processing scenes containing different kinds of land cover. The integration of these two sets of objects gives a set of multiscale, spectrally and texturally homogeneous objects. They can be characterized by different attributes, allowing discrimination between different spatial and spectral patterns. Unsupervised artificial neural networks based on Growing Cell Structures (GCS) [3] have been used to classify the objects, with the attribute vector of each object serving as a training pattern.

2. Methods and materials

A Quickbird scene recorded on 18 February 2005 over the O'Higgins region, Peumo Valley, Chile (34° 18' 11'' S, 71° 19' 11'' W) has been used to illustrate the methodology proposed in this work. The panchromatic image is 512 x 512 pixels, corresponding to an area of 9.43 hectares. Fig. 1 a) shows a colour composition (Near Infrared-Green-Blue) of the multispectral scene and Fig. 1 b) the panchromatic scene. Five spectral classes have been identified in this scene. The panchromatic and multispectral images have been geo-referenced to each other to ensure a proper fit between them.

Figure 1: Quickbird scene: (a) NIR-Green-Blue colour composite of the multispectral image; (b) panchromatic image.

The proposed methodology requires no other pre-processing. Fig. 2 displays a scheme of the methodology, which is described below. The higher spatial resolution of the panchromatic image justifies its use for defining objects from a morphological point of view; however, such objects can be spectrally and texturally heterogeneous, at least at some scales. To avoid the influence of the spectral information contained in the panchromatic image, its wavelet à trous transform [4] has been computed, and the resulting coefficients have been segmented to define homogeneous textural objects (textural objects). The segmentation algorithm is based on Otsu's method [5], which selects an optimal threshold by maximizing the between-class variance through a search over the whole image. In addition, using the wavelet coefficients to determine textural objects allows them to be defined at different scales, one per coefficient level; the scale, or resolution level, to use depends on the particular image as well as on the objective of the final application. To determine homogeneous spectral objects (spectral objects), the pair of spectral bands with the lowest correlation is chosen, spectrally homogeneous areas are delimited in the corresponding scattergram, and these areas are then mapped back onto the multispectral image (pre-classification).
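To make the textural-object step described above concrete, the following is a minimal sketch of an à trous decomposition of the panchromatic band followed by Otsu thresholding of each coefficient plane. The B3-spline kernel, the NumPy/SciPy/scikit-image calls and the labelling of thresholded regions as objects are assumptions of this sketch, not prescriptions of the paper.

    # Sketch only: a trous wavelet planes of the panchromatic band, segmented with Otsu.
    import numpy as np
    from scipy.ndimage import convolve
    from skimage.filters import threshold_otsu
    from skimage.measure import label

    B3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0   # B3-spline scaling filter (assumed)

    def a_trous_planes(pan, levels=3):
        """Return the detail (wavelet) planes of `pan` for the first `levels` scales."""
        planes, current = [], pan.astype(float)
        for j in range(levels):
            taps = np.zeros((len(B3) - 1) * 2**j + 1)
            taps[::2**j] = B3                              # insert holes ("trous") between taps
            smoothed = convolve(current, np.outer(taps, taps), mode="mirror")
            planes.append(current - smoothed)              # detail coefficients at scale j + 1
            current = smoothed
        return planes

    def textural_objects(plane, min_size=50):
        """Threshold one coefficient plane with Otsu's method and label the regions."""
        mask = np.abs(plane) > threshold_otsu(np.abs(plane))
        objects = label(mask)
        # drop regions smaller than the minimum object size used in the experiments
        ids, counts = np.unique(objects, return_counts=True)
        small = ids[(counts < min_size) & (ids != 0)]
        objects[np.isin(objects, small)] = 0
        return objects

Each nonzero label in the returned map would then play the role of one textural object at that scale.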

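The spectral-object step (choosing the least correlated pair of bands and partitioning their scattergram into homogeneous regions) might be sketched as follows. The paper delimits five homogeneous regions in the scattergram; the k-means clustering used here is only a stand-in for that selection, and the function names are assumptions.

    # Sketch only: spectral objects from the scattergram of the two least correlated bands.
    import numpy as np
    from sklearn.cluster import KMeans   # stand-in for the region delimitation in the scattergram

    def least_correlated_pair(ms):
        """ms: (rows, cols, bands). Return the indices of the two least correlated bands."""
        corr = np.abs(np.corrcoef(ms.reshape(-1, ms.shape[-1]).T))
        np.fill_diagonal(corr, np.inf)                     # ignore self-correlation
        return np.unravel_index(np.argmin(corr), corr.shape)

    def spectral_objects(ms, n_regions=5):
        """Partition the 2-D scattergram into `n_regions` clusters and map them back to pixels."""
        i, j = least_correlated_pair(ms)
        pts = np.column_stack([ms[..., i].ravel(), ms[..., j].ravel()]).astype(float)
        labels = KMeans(n_clusters=n_regions, n_init=10).fit_predict(pts)
        return labels.reshape(ms.shape[:2])                # pre-classification label map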
The integration of these two types of objects, spectral and textural, yields objects with a particular joint spectral and textural behaviour pattern. As a result of this process, n_k objects are obtained for each scale or level k, where k ranges over the levels of the wavelet transform.

Figure 2: Multiscale object generation and characterization.

Once the multiscale objects have been determined, they must be characterized by a set of textural and spectral attributes, which depend on the scene under study and on the application. Each object is thus defined by the positions (row and column) of the pixels it contains, together with an attribute vector, providing a joint and simplified representation of the information contained in both source images. The attributes considered in a first approach are the mean values and standard deviations of the grey levels of each object in each spectral band, as well as the mean values and standard deviations of the wavelet coefficients and the lacunarity of each object at the different scales. Since lacunarity measures how data fill space [6], it complements the wavelet coefficients, which detect details in the image. Many other spectral and textural attributes could be used. This representation reduces the number of data values to be processed from 1,310,720 (512 x 512 x 5) to 33,920, considering three scales and ten attributes (3,392 objects over the three scales, each described by ten attributes).
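Lacunarity itself can be estimated with the gliding-box algorithm: slide a window of size r over the data, record the "mass" inside each box, and take the ratio of the second moment to the squared first moment of those masses. A minimal sketch, assuming NumPy/SciPy and a window of 15 as in the experiments reported below; how the measure is aggregated per object is likewise an assumption of the sketch.

    # Sketch only: gliding-box lacunarity, Lambda(r) = <M^2> / <M>^2 over box masses M.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def gliding_box_lacunarity(values, window=15):
        """Lacunarity of a 2-D array for one box size. Using box means instead of box sums
        leaves the ratio unchanged, since the window area cancels out."""
        box_mass = uniform_filter(values.astype(float), size=window)
        m1 = box_mass.mean()
        m2 = (box_mass ** 2).mean()
        return float(m2 / m1 ** 2) if m1 > 0 else 0.0

One possible per-object use (an assumption, not stated in the paper) is to evaluate the measure on the wavelet coefficients restricted to the object's bounding box.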

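Given the pixel positions of an object, assembling its attribute vector is then a matter of averaging the chosen measures over those positions. A sketch along the lines of the attribute list above (means and standard deviations per band plus wavelet and lacunarity terms); the exact composition and ordering are assumptions, and the Results below use a reduced six-component variant.

    # Sketch only: attribute vector of one object at one scale.
    import numpy as np

    def object_attributes(pixels, ms, wavelet_plane, lacunarity_value):
        """pixels: (N, 2) array of (row, col) positions belonging to the object;
        ms: (rows, cols, bands) multispectral image; wavelet_plane: detail coefficients
        at the scale under consideration; lacunarity_value: precomputed for the object."""
        r, c = pixels[:, 0], pixels[:, 1]
        means = [ms[r, c, b].mean() for b in range(ms.shape[-1])]   # grey-level means per band
        stds = [ms[r, c, b].std() for b in range(ms.shape[-1])]     # grey-level std devs per band
        return np.array(means + stds
                        + [wavelet_plane[r, c].mean(), wavelet_plane[r, c].std(),
                           lacunarity_value])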
2.1. Multiscale object-based classification

The attribute vector associated with each object has been used as a training pattern for the GCS network [7] employed in this work for unsupervised classification. A GCS network has a two-layer architecture: neurons in the input layer are fully connected to those in the output layer, and each connection carries a weight w_ij, where i identifies the input neuron and j the output neuron. There are as many input neurons as the dimension of the input vectors (the number of attributes). Neurons in the output layer are linked by neighbourhood connections forming a topology composed of basic t-dimensional hyper-tetrahedra. In this work, the modification of the GCS training algorithm proposed in [3] has been used. The learning phase of a GCS network adapts the synaptic vectors so that each output neuron comes to represent a group of similar input patterns. At the beginning of training, the output layer has only three neurons interconnected by neighbourhood relations (t = 2). During learning, the set of input patterns is presented to the network iteratively; in each adaptation step one input pattern is processed, and the synaptic vector of the winning neuron and those of its topological neighbours are modified so as to move them slightly towards the pattern just processed.

3. Results

The methodology described above and illustrated in Fig. 2 has been applied to the images shown in Fig. 1 (a and b). The à trous wavelet coefficients of the panchromatic image have been generated for the first three levels, and textural objects have then been obtained with the Otsu method. Since the method allows a minimum object size to be imposed, experiments with different values (5, 25 and 50) have been carried out. Spectral objects have been generated by selecting five different homogeneous spectral regions in the B3-B4 scattergram; these regions cover the whole image. In this paper, the integration of textural and spectral objects has been performed as the intersection between the two types of objects, although other integration procedures should be investigated. Table 1 summarizes the number of objects obtained for each scale and for the different minimum sizes of the textural objects.

Table 1. Number of objects obtained for each scale and minimum textural object size.

Minimum object size   Scale 1   Scale 2   Scale 3
5                        3334       531       316
25                       2782       350       260
50                       2696       265       221

Several experiments have been conducted with the different sets of objects, characterized by distinct subsets of the spectral and textural attributes indicated above. The results shown in this paper correspond to the objects obtained for a minimum textural object size of 50 (third row of Table 1). The attribute vectors have six components: the mean grey level of each object in each spectral band, the mean value of the wavelet coefficients at the considered scale, and the mean value of the lacunarity, computed with a window size of 15. A separate GCS has been trained for each set of objects associated with a scale of the wavelet transform; that is, all attribute vectors of the objects at a given scale have been used as training patterns for one GCS, and the network has been forced to cluster them into five groups so that the results can be compared visually with a pixel-based classified image (Fig. 3 a).
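A minimal sketch of the adaptation step described in Section 2.1, in which the best-matching output neuron and its topological neighbours are pulled slightly towards the presented pattern. The learning rates and data structures are assumptions; the cell-insertion and deletion mechanics of GCS, and the modification of [3], are omitted.

    # Sketch only: one GCS adaptation step (growth/removal of cells not shown).
    import numpy as np

    def gcs_adapt(weights, neighbours, x, eps_b=0.06, eps_n=0.002):
        """weights: (n_cells, n_attrs) float array of synaptic vectors; neighbours: dict
        mapping a cell index to its neighbouring cells; x: one attribute vector.
        Returns the index of the winning cell."""
        bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))   # best-matching unit
        weights[bmu] += eps_b * (x - weights[bmu])                  # move the winner towards x
        for n in neighbours[bmu]:
            weights[n] += eps_n * (x - weights[n])                  # and its neighbours, slightly
        return bmu

Training would iterate this step over all attribute vectors of one scale, periodically inserting new cells, until the map stabilizes; in the experiments reported here the resulting cells are grouped into five clusters.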

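For completeness, the object integration by intersection used above (every distinct pair of spectral and textural labels becomes one multiscale object) could be sketched as follows; the function name and the use of NumPy are assumptions.

    # Sketch only: intersect the spectral and textural label maps into multiscale objects.
    import numpy as np

    def integrate_objects(spectral_labels, textural_labels):
        """Both inputs are integer label images of the same shape. Returns an object-id
        image and the (spectral id, textural id) pair behind each object id."""
        pairs = np.column_stack([spectral_labels.ravel(), textural_labels.ravel()])
        uniq, obj_ids = np.unique(pairs, axis=0, return_inverse=True)
        return obj_ids.reshape(spectral_labels.shape), uniq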
A further spatialization of the classification results gives the classified images displayed in Fig. 3 (b, c and d). It should be noted that the classes determined by the object-based classification method do not have the same meaning as the spectral classes of the pixel-based classified image; similar colours have been chosen for both only to ease interpretation. In a visual comparison, it can be appreciated that the object-based classification detects the presence of vegetation (red) on the roads (yellow), whereas the pixel-based classification does not. Regarding the growing areas (see Fig. 1 b), a better separation between vegetation (red), soil (green) and shadow (blue) is provided by the object-based classified images than by the pixel-based classified image.

Figure 3: (a) Pixel-based classified image; (b), (c) and (d) object-based classified images for the three wavelet levels.

Zooms of three different growing areas of these images are included in Fig. 4. There it can be appreciated that the proposed method isolates individual trees in all three areas. However, the first scale appears to be over-segmented; further investigation is needed to reduce this effect.

4. Conclusions

This work has proposed a methodology for identifying and characterizing objects in a satellite scene by merging the information supplied by images of different spatial and spectral resolution. The set of objects and their attribute vectors gives a simplified, joint representation of the information contained in the two source images used here (multispectral and panchromatic). This representation facilitates the processing of that information, both conceptually and computationally. Accordingly, the attribute vectors have been used directly as training patterns for a particular artificial neural network (Growing Cell Structures).

Preliminary unsupervised classification results have been included to show the potential of the proposed methodology. Nevertheless, further investigation is required to enhance it at all stages.

Figure 4: Zooms of three different areas of the image (rows) for the first three wavelet levels (columns).

Acknowledgements

This work has been supported by the Universidad Politécnica de Madrid (AL11-P(I+D)27), the Fondo de Fomento al Desarrollo Científico y Tecnológico de Chile (FONDEF-D09I1069) and the Universidad de Concepción (DIUC 211.131.014-1.0).

References

[1] Bruce B. A., 2008. Object oriented classification: case studies using different image types with different spatial resolutions. International Society for Photogrammetry and Remote Sensing. (http://arrow.unisa.edu.au:8081/1959.8/64668).
[2] Huang X., L. Zhang & P. Li, 2008. A multiscale feature fusion approach for classification of very high resolution satellite imagery based on wavelet transform. International Journal of Remote Sensing, (29) 20, 5923-5941.
[3] Delgado S., C. Gonzalo, E. Martínez and A. Arquero, 2004. Improvement of self-organizing maps with growing capability for goodness evaluation of multispectral training patterns. Geoscience and Remote Sensing Symposium, IEEE International, 1, 564-567.
[4] Dutilleux P., 1987. An implementation of the "algorithme à trous" to compute the wavelet transform. In Comptes-rendus du congrès Ondelettes et méthodes temps-fréquence et espace des phases, J. Combes, A. Grossmann and Ph. Tchamitchian (Eds), 298-304 (Marseille: Springer-Verlag).

[5] Otsu N., 1979. A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics, (9) 1, 62-66.
[6] Myint S. and N. Lam, 2005. A study of lacunarity-based texture analysis approaches to improve urban image classification. Computers, Environment and Urban Systems, (29) 5, 501-523.
[7] Fritzke B., 1994. Growing Cell Structures - a self-organizing network for unsupervised and supervised learning. Neural Networks, (7) 9, 1441-1460.