746A27 Remote Sensing and GIS Lecture 4 Digital image processing Chandan Roy Guest Lecturer Department of Computer and Information Science Linköping University
Digital Image Processing
Most of the common image processing functions available in image analysis systems fall into four categories:
- Preprocessing
- Image enhancement
- Image transformation
- Image classification and analysis
Preprocessing Preprocessing functions involve those operations that are normally required prior to the main data analysis and extraction of information, and are generally grouped as radiometric or geometric corrections. Pre-processing operations, sometimes referred to as image restoration and rectification, are intended to correct for sensor- and platform-specific radiometric and geometric distortions of data.
Line dropout: when a detector fails to record a scan line, the missing signal appears as a stripe in the image.
Mosaic
MOSAIC creates a new image by spatially orienting overlapping images and optionally balancing the numeric characteristics of the image set based on the overlapping areas. Input images must have the same data type, resolution, and reference system.
Removing the stripes
DESTRIPE removes the striping caused by variable detector output in scanner imagery.
Radiance
RADIANCE converts raw satellite DN values to calibrated radiances, using lookup tables of gain and offset settings for Landsat 1-5 and user-defined Lmin/Lmax or Offset/Gain values for other sensor systems. Conversion to radiance facilitates comparisons between images from different dates.
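The DN-to-radiance conversion described above is a linear rescaling per band. A minimal NumPy sketch of that arithmetic, with hypothetical gain/offset constants (real values come from the sensor's calibration metadata, e.g. the Landsat header files):

```python
import numpy as np

# Hypothetical calibration constants for illustration only;
# real gain/offset values are sensor- and band-specific.
GAIN = 0.7756    # W / (m^2 sr um) per DN
OFFSET = -6.2    # W / (m^2 sr um)

def dn_to_radiance(dn, gain=GAIN, offset=OFFSET):
    """Convert raw digital numbers (DN) to at-sensor spectral radiance."""
    return gain * dn.astype(np.float64) + offset

band = np.array([[0, 128], [255, 64]], dtype=np.uint8)
radiance = dn_to_radiance(band)
```

Because the conversion is linear, comparing radiances (rather than raw DNs) removes the effect of per-date gain settings when images from different dates are compared.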
Atmospheric correction
Here atmospheric effects are removed through different calculations. Four models are generally used for atmospheric correction:
- Dark Object Subtraction (DOS) model
- Chavez's Cos(t) model
- Full radiative transfer equation model (FULL)
- Apparent Reflectance Model (ARM)
In each case the input consists of a raw image band and a set of atmospheric and viewing-condition parameters. The output in each case is an image of proportional reflectances, expressed in real-number format as values from 0.0 to 1.0.
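The simplest of the four models, Dark Object Subtraction, assumes the darkest pixel in a band should be near zero, so its value estimates the additive haze signal. A minimal sketch of that idea (not the full DOS model, which also handles the reflectance conversion):

```python
import numpy as np

def dark_object_subtraction(band, dark_dn=None):
    """Subtract the darkest pixel value (taken as the haze estimate)
    from every pixel; a crude first-order atmospheric correction."""
    if dark_dn is None:
        dark_dn = band.min()
    corrected = band.astype(np.int32) - int(dark_dn)
    return np.clip(corrected, 0, None)   # no negative radiances

band = np.array([[12, 40], [200, 12]], dtype=np.uint8)
out = dark_object_subtraction(band)      # haze estimate = 12
```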
Enhancement
Satellite images are enhanced for easier visual interpretation and understanding of the imagery. Although radiometric corrections for illumination, atmospheric influences, and sensor characteristics may be applied before the data are distributed to the user, the image may still not be optimized for visual interpretation.
Image stretching
In raw imagery, the useful data often occupy only a small portion of the available range of digital values. This limited range causes problems during classification. Image stretching is used to overcome this problem.
[Figure: image stretching, before and after]
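A linear contrast stretch, the basic form of the operation above, maps the occupied DN range onto the full display range. A minimal sketch, assuming 8-bit output:

```python
import numpy as np

def linear_stretch(band, out_min=0, out_max=255):
    """Linearly rescale the occupied DN range to the full output range."""
    lo, hi = float(band.min()), float(band.max())
    scaled = (band.astype(np.float64) - lo) / (hi - lo)
    return (scaled * (out_max - out_min) + out_min).astype(np.uint8)

# Raw values crowded into 60-100 are spread across 0-255.
raw = np.array([[60, 80], [100, 90]], dtype=np.uint8)
stretched = linear_stretch(raw)
```

Production systems often stretch between percentile cut-offs (e.g. 2% and 98%) rather than the absolute minimum and maximum, so a few outlier pixels do not compress the rest of the range.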
Image composition
COMPOSITE produces a 24-bit color composite image from three bands of byte binary imagery for display and visual analysis. In composite images, land-use types can be differentiated more easily than in a single band.
[Figure: composites built from band combinations 1-2-3, 2-3-4, 2-4-1, 3-2-1, and 4-3-2]
Making composite images in different band sequences generates different results. This color variation occurs because each band interacts differently with the ground-surface objects, and because of the placement of the bands during composition.
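Building a composite amounts to stacking three single-band images into the red, green, and blue channels; changing which band feeds which channel is what produces the different color results. A minimal sketch:

```python
import numpy as np

def make_composite(red_band, green_band, blue_band):
    """Stack three single-band byte images into an H x W x 3 RGB composite."""
    return np.dstack([red_band, green_band, blue_band]).astype(np.uint8)

# Toy 2x2 bands; a "4-3-2" false-color composite would pass
# band 4 as red, band 3 as green, and band 2 as blue.
b = np.full((2, 2), 10, dtype=np.uint8)
rgb = make_composite(b, b + 5, b + 9)
```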
Spatial filtering encompasses another set of digital processing functions used to enhance the appearance of an image. Spatial filters are designed to highlight or suppress specific features in an image based on their spatial frequency. By varying the calculation performed and the weightings of the individual pixels in the filter window, filters can be designed to enhance or suppress different types of features.
Low-pass filter
A low-pass filter is designed to emphasize larger, homogeneous areas of similar tone and reduce the smaller detail in an image. Low-pass filters therefore generally smooth the appearance of an image.
High-pass filter
High-pass filters do the opposite: they sharpen the appearance of fine detail in an image.
Directional, or edge-detection, filters
Directional, or edge-detection, filters are designed to highlight linear features such as roads or field boundaries. These filters can also be designed to enhance features oriented in specific directions, which makes them useful in applications such as geology, for the detection of linear geologic structures.
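Both filter types are implemented as a moving-window (kernel) convolution; only the weights differ. A minimal sketch with a 3x3 mean kernel (low-pass) and a Laplacian-style kernel (high-pass), convolving interior pixels only:

```python
import numpy as np

def convolve3x3(img, kernel):
    """Apply a 3x3 filter kernel to the interior pixels of an image."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

low_pass = np.full((3, 3), 1 / 9.0)          # mean filter: smooths
high_pass = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]])          # emphasizes local detail

img = np.array([[10, 10, 10],
                [10, 90, 10],
                [10, 10, 10]], dtype=float)   # one bright pixel
smooth = convolve3x3(img, low_pass)   # bright pixel averaged down
edges = convolve3x3(img, high_pass)   # bright pixel strongly amplified
```

A directional filter is the same operation with an asymmetric kernel (e.g. negative weights on one side, positive on the other) so that edges of a particular orientation respond most strongly.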
[Figure: image before and after filtering]
Pansharpen Pansharpen uses a high-resolution panchromatic image to increase the spatial resolution of low-resolution multispectral images. Image Transformations Image transformations typically involve the manipulation of multiple bands of data, whether from a single multispectral image or from two or more images of the same area acquired at different times. Basic image transformations apply simple arithmetic operations to the image data. Image subtraction is often used to identify changes that have occurred between images collected on different dates.
[Figure: image subtraction — 1st image, 2nd image, resultant image]
Among other image transformation techniques, spectral (band) ratioing, NDVI, and PCA are commonly used.
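Image subtraction and NDVI are both simple per-pixel band arithmetic. A minimal sketch of each, where NDVI = (NIR - red) / (NIR + red):

```python
import numpy as np

def change_image(date1, date2):
    """Difference of two co-registered bands; non-zero pixels indicate change."""
    return date2.astype(np.int32) - date1.astype(np.int32)

def ndvi(red, nir):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red)."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    return (nir - red) / (nir + red)

d = change_image(np.array([[100, 50]], dtype=np.uint8),
                 np.array([[100, 80]], dtype=np.uint8))

red = np.array([[50.0, 30.0]])
nir = np.array([[200.0, 30.0]])
v = ndvi(red, nir)   # high over vegetation, near zero over bare surfaces
```

Vegetation reflects strongly in the near-infrared and absorbs red light, so healthy vegetation pushes NDVI toward 1, while bare soil and water stay near or below 0.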
Image Classification Concept of image classification
Objectives of classification
RS image Classification
Image classification in remote sensing
Multi-layer perceptron classifier
MLP undertakes the classification of remotely sensed imagery through a multi-layer perceptron neural network classifier using the back-propagation (BP) algorithm. The calculation is based on information from the training sites. The user must first specify the classification option: whether to train the network or use existing weights.
Kohonen's Self-Organizing Map (SOM)
SOM undertakes both unsupervised and supervised classification of remotely sensed imagery using Kohonen's Self-Organizing Map (SOM) neural network.
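To make the MLP/back-propagation idea concrete, here is a minimal NumPy sketch: a one-hidden-layer network trained by gradient descent on synthetic two-class "pixel spectra" standing in for training-site data. This is an illustration of the technique, not the implementation used by any particular software package:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training pixels: two spectrally separable classes of
# 3-band samples (a stand-in for real training-site spectra).
X = np.vstack([rng.normal(0.2, 0.05, (50, 3)),
               rng.normal(0.8, 0.05, (50, 3))])
y = np.array([0] * 50 + [1] * 50)

# One hidden layer of 5 units, sigmoid activations.
W1 = rng.normal(0, 0.5, (3, 5)); b1 = np.zeros(5)
W2 = rng.normal(0, 0.5, (5, 1)); b2 = np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    h = sig(X @ W1 + b1)                       # forward pass
    p = sig(h @ W2 + b2).ravel()
    grad_out = (p - y) / len(y)                # cross-entropy gradient
    grad_h = (grad_out[:, None] @ W2.T) * h * (1 - h)   # back-propagate
    W2 -= 0.5 * h.T @ grad_out[:, None]; b2 -= 0.5 * grad_out.sum()
    W1 -= 0.5 * X.T @ grad_h;           b1 -= 0.5 * grad_h.sum(axis=0)

pred = (sig(sig(X @ W1 + b1) @ W2 + b2).ravel() > 0.5).astype(int)
accuracy = (pred == y).mean()
```

Once trained, the same forward pass assigns a class to every pixel in the image; the "use existing weights" option mentioned above corresponds to skipping the training loop and reusing saved W1/W2.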