Separating touching and overlapping objects in particle images: a combined approach

Jose M. Korath 1, Ali Abbas 2, Jose A. Romagnoli 3

1 School of Chemical and Biomolecular Engineering, University of Sydney, Australia. 2 School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore. 3 Department of Chemical Engineering, Louisiana State University, USA.

Determination of particle size distribution by image analysis is often rendered ineffective by the presence of touching and overlapping particles in the acquired images. Size and shape estimation performed without first separating the touching and overlapping objects leads to gross errors in the estimated values. Existing separation methods, such as watershed segmentation and morphological processing, suffer from over-segmentation and relatively long processing times. A new automatic algorithm is developed that separates touching and overlapping particles based on the intensity variations in the regions of touch/overlap and on geometric features of the boundary curves. The performance of the algorithm is compared with that of the watershed segmentation algorithm as well as with visual examination of the images.

1. Introduction

Online measurement of particle size distribution (PSD) can be of utmost importance in the monitoring and control of many particulate processes in the pharmaceutical and fine chemicals industries, amongst others. However, many techniques of direct measurement are not suitable for in-situ/online characterization. Direct real-time visual characterization is a promising alternative to complex and expensive particle measurement systems, and attempts in this direction have been reported in the literature (Nazar et al., 1996; Watano and Miyanami, 1995). However, this field still needs research and development, particularly in overcoming the problem of touching and overlapping particles.
Touching and/or overlapping particles are always present in images taken from particulate systems where no particle dispersion is applied prior to image capture, and some systems are not amenable to dispersion, such as fragile flocs and certain crystalline particles. The development of an algorithm that can address the issue of touching/overlapping objects is the concern of the present paper. Existing approaches invariably take a priori knowledge about the shape of the particles as input (Honakanen et al., 2005; Shen et al., 2000). Such an assumption does not hold for many real-life industrial systems. The approach adopted in this paper is based on capturing the intensity variations along the regions of touch/overlap and using this information to separate such particle aggregates. Wherever the intensity variation information alone is not conclusive enough to identify such regions, geometric features of the boundary curves are used as complementary information.

2. Methods and analyses

The major steps involved in processing an image are preprocessing, segmentation and feature extraction. Although many image-processing methodologies exist for each of these stages, selecting methods appropriate to the particular genre of images is very important for achieving the best results. We next discuss these steps: preprocessing, capturing of intensity variations, binarization, extraction of geometric features of boundary curves, and separation of the touching and overlapping regions.

2.1 Preprocessing

Median filtering with a 3x3 kernel is applied to the image to remove random noise. This is an order-statistic filter which replaces the value of a pixel by the median of the pixel values in a small neighborhood (Gonzalez and Woods, 2002):

f(x, y) = median{ g(u, v) },  (u, v) ∈ S(x, y)    (1)

where S(x, y) is the neighborhood around (x, y). Fig. 1 shows the original image after median filtering.

2.2 Segmentation

Once the image is denoised, the objects have to be identified; this process is in general known as segmentation. A large number of segmentation techniques are described in the literature (Pal and Pal, 1993), but in images of particulate systems the objects (particles) and the background differ distinctly in brightness. We can therefore separate the objects from the background by thresholding, i.e. by applying a criterion based on pixel intensity values.

2.2.1 Thresholding

We use Otsu's method of global thresholding based on the image histogram (Otsu, 1979). In this method the pixels are divided into two classes such that the means of the classes are as separated as possible and the variance within each class is as small as possible; the class boundary is the threshold value.
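As a concrete illustration, the 3x3 median filter of Eq. (1) and Otsu's threshold selection can be sketched in plain NumPy. This is a minimal sketch, not the authors' implementation; the function names are ours.

```python
import numpy as np

def median_filter_3x3(img):
    """Eq. (1): replace each pixel by the median of its 3x3 neighborhood.
    Borders are handled by edge replication."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # Stack the nine shifted copies that make up each pixel's neighborhood
    stack = np.stack([padded[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)])
    return np.median(stack, axis=0).astype(img.dtype)

def otsu_threshold(img):
    """Otsu (1979): pick the threshold that maximizes the
    between-class variance of the gray-level histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # probability of class 0
    mu = np.cumsum(p * np.arange(256))      # cumulative mean
    mu_t = mu[-1]                           # global mean
    # Between-class variance for every candidate threshold
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))
```

Binarizing with `img > t` then yields the I_bw of Eq. (2) below.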
The gray-scale image (I_g) and the binary image (I_bw) can be represented as

I_g = { x | 0 ≤ x ≤ U };  I_bw = { x | x ∈ {0, 1} }    (2)

where U is the upper limit of the intensity, which is 255 for a gray-scale image. The binarized image is shown in Fig. 2. It can be seen that most of the touching regions are counted as belonging to the object regions (white), resulting in a single large particle in the center of the image. Feature extraction based on such an incompletely segmented image will produce erroneous results. The traditional method of watershed segmentation (Russ, 2002) for touching/overlapping objects suffers from severe over-segmentation when objects are not of regular shape. The following sections describe the proposed new algorithm.

Fig. 1. Denoised gray-scale image. Fig. 2. Image after thresholding.

2.2.2 Capturing the dip in intensity across touching and overlapping regions

The touching and overlapping regions show intensity variations characterized by local dips in intensity, on account of the relatively less light reflected from these regions compared with other areas of the particulate objects. The graph on the right of Fig. 3 shows the variation of intensity along the horizontal line drawn in the gray-scale image on the left of the same figure; the intensity dip at a touching region is marked with an oval.

Fig. 3. Intensity variation along a horizontal line. Fig. 4. Computed dip lines.

The most obvious and common way to identify these regions is to calculate the gradient (Canny, 1986). However, the intensity change at many of these dips is not rapid and would not produce an appreciable absolute gradient. Hence, instead of computing gradients, we identify the bottom points of the dips by scanning the image pixels in various directions. A collection of such points forms what we call a dip line, which identifies a touching or overlapping region. We are not interested in gradual changes in the background area, so only dip points contained in the object area are retained. The dip lines thus captured are shown in Fig. 4. Morphological dilation is applied to bridge small gaps, due to random noise, in an otherwise continuous line. The use of these dip lines for the separation of touching and overlapping particles is described in a later section.
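The dip capture can be illustrated with a row-wise scan. This is a simplified sketch: the paper scans in several directions and then dilates the result, whereas here we scan rows only; the image is assumed to be float-valued in [0, 1], and `min_depth` is an assumed parameter, not from the paper.

```python
import numpy as np

def dip_points_along_rows(gray, obj_mask, min_depth=0.05):
    """Mark pixels that are the bottoms of local intensity dips along each
    image row, keeping only those that fall on object pixels."""
    dips = np.zeros(gray.shape, dtype=bool)
    left = gray[:, :-2]
    centre = gray[:, 1:-1]
    right = gray[:, 2:]
    # A dip bottom is noticeably darker than both horizontal neighbours
    is_dip = (centre + min_depth <= left) & (centre + min_depth <= right)
    dips[:, 1:-1] = is_dip & obj_mask[:, 1:-1]
    return dips
```

Collections of such points over several scan directions form the dip lines of Fig. 4; a morphological dilation would then bridge the small gaps left by noise.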
2.2.3 Extracting geometric features of boundary curves

Another key feature of touching/overlapping regions is the existence of sharp turns (concavities) in the vicinity of the intersection or tangency of the boundaries of two objects. Moreover, the wedges formed on the two sides of a touch/overlap are oriented in opposite directions. Several methods have been reported in the literature (Shen et al., 2000; van den Berg et al., 2002) for quantifying the magnitude and orientation of the concavity or wedgeness of boundary curves; we use the concepts of center of gravity (COG) and local eccentricity here. In this method, for a particular pixel on the boundary, the coordinates of the COG are calculated as

X_cog = (1/N) Σ_{i=1..N} x_i ;  Y_cog = (1/N) Σ_{i=1..N} y_i    (3)

where x_i and y_i are the relative coordinates of the N neighboring points (also lying on the boundary) with respect to the particular pixel. The eccentricity is then calculated as

ε = sqrt( X_cog² + Y_cog² )    (4)

The orientation vector of the COG relative to the pixel under consideration is represented by its coordinates (X_cog, Y_cog). The magnitude of the eccentricity and the orientation vector of the wedge point are used (wherever necessary) in conjunction with the dip lines to provide a robust separation methodology for touching and overlapping objects, as discussed in the next section.

2.2.4 Separation of the touching and overlapping particles

To achieve separation of touching/overlapping particles, we further process the thresholded image with the additional information of the dip lines. The image of dip lines in Fig. 4 is subtracted from the binary image (Fig. 2); the result is shown in Fig. 5. Separation has been achieved at many touching regions, but there are also black pixels on the object bodies corresponding to dip lines entirely contained within the objects. To decide whether a dip line represents a minimum at a touching/overlap region, we propose the following logic: wherever particles touch or overlap, the dip line starts from the boundary. The reasoning is that the intensity variation is most prominent where the boundaries of particles cross. Accordingly, we exclude all dip lines that do not extend to the boundary of the objects. The result is shown in Fig. 6.
Apart from the dip lines that completely separate touching/overlapping regions, there are dip lines that start from the boundary and run into the object body but fail to reach the boundary curve on the opposite side, so that no complete separation is achieved. We refer to these dip lines henceforth as burrows. Among the burrows we can identify different kinds: some are very short; others form a pair, seemingly coming from a continuous line broken by noise; yet others run very deep into the objects and almost extend to the boundary on the opposite side but stop just short of piercing through. Ignoring the very short burrows, we utilize the two latter categories in conjunction with the geometric features obtained by the methods of section 2.2.3. The close tips of a pair of burrows are joined if two conditions on the boundary-curve geometry are satisfied: i) the pixels on the boundary curve in the vicinity of the starting point of each burrow have a considerable value of the eccentricity, and ii) the orientation vectors at the starts of the two burrows in the pair point in opposite directions. The result of such a joining is shown in Fig. 7 (marked by an oval). There are no unpaired burrows of the third category in this particular image; when such burrows do occur, the distance between the burrow tip and the nearest point of high eccentricity on the boundary is calculated first. If this distance is small enough (less than a threshold), the orientation checks are done as before and the burrow tip is connected to the boundary point on the opposite side, completing the separation.

Fig. 5. Binary image with dip lines. Fig. 6. Image after dip-line processing. Fig. 7. After joining burrow tips.

3. Results and Discussion

Once the touching and overlapping objects are separated, the individual objects are labeled and their size feature is determined. The feature used to represent size is the equivalent diameter, i.e. the diameter of the circle with the same area as the object. A comparison of the PSD of the objects in Fig. 1 before and after application of the separation algorithm is given in Figs 8 and 9.

Fig. 8. PSD of non-separated objects in Fig. 1. Fig. 9. PSD after application of the algorithm.

The effect of touching/overlap on the PSD is clearly evident in this comparison: the PSD of the image with no separation is biased towards larger sizes, as expected, owing to the agglomeration of image objects. Our results are further compared against those of the traditional watershed segmentation technique (Figs 10-11). It can be seen that the watershed method suffers from over-segmentation, so that a large number of small particles are generated, as evident in the PSD of Fig. 11. Our method achieves superior separation with far fewer errors than the traditional watershed segmentation technique.
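The boundary-geometry quantities of Eqs (3)-(4) and the orientation test used when joining burrow tips can be sketched as follows. This is a minimal sketch under our own conventions: the boundary is an ordered array of pixel coordinates, the N neighbours are taken symmetrically about the point, and implementing "opposite orientation" as a negative dot product is our assumption, not specified in the paper.

```python
import numpy as np

def local_cog(boundary, idx, N=4):
    """Eqs (3)-(4): COG of the N boundary neighbours of point `idx`,
    in coordinates relative to that point, and its eccentricity.
    `boundary` is an ordered (M, 2) array of pixel coordinates; N even."""
    M = len(boundary)
    offsets = list(range(-N // 2, 0)) + list(range(1, N // 2 + 1))
    neigh = boundary[[(idx + k) % M for k in offsets]]
    rel = neigh - boundary[idx]          # relative coordinates (x_i, y_i)
    xcog, ycog = rel.mean(axis=0)        # Eq. (3)
    eps = np.hypot(xcog, ycog)           # Eq. (4)
    return np.array([xcog, ycog]), eps

def opposite_orientation(v1, v2):
    """Condition (ii): the two COG orientation vectors point in
    roughly opposite directions (negative dot product)."""
    return float(np.dot(v1, v2)) < 0.0
```

On a straight stretch of boundary the neighbours cancel and ε ≈ 0, while at a sharp wedge the COG is pulled into the wedge, giving a large ε and a usable orientation vector.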
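The equivalent diameter used as the size feature follows directly from the object's pixel area; a one-line sketch:

```python
import math

def equivalent_diameter(area_pixels):
    """Diameter of the circle whose area equals the object's area:
    A = pi * (d / 2)**2  =>  d = 2 * sqrt(A / pi)."""
    return 2.0 * math.sqrt(area_pixels / math.pi)
```

Applying this to every labeled object and histogramming the diameters yields PSDs of the kind compared in Figs 8 and 9.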

Fig. 10. Result of watershed segmentation. Fig. 11. PSD under watershed segmentation.

4. Conclusions and future work

A combined approach based on intensity variations and geometric features of the boundary curves has been proposed to achieve separation of touching and overlapping objects in images of particulate systems. Instead of the usual approach of gradient calculation to identify characteristic variations in intensity, trapping of local dips in intensity along the regions of touch/overlap was effectively employed. Wherever the information from the intensity variations was not conclusive, geometric features of the boundary curves were used as complementary information to complete the separation. Multi-resolution analyses are currently being pursued to improve the effectiveness of capturing the intensity dips, so that the loss of intensity-variation information can be curtailed.

5. References

Canny, J., 1986, A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8, 679-698.
Gonzalez, R.C. and R.E. Woods, 2002, Digital Image Processing, Prentice Hall.
Honakanen, M., P. Saarenrinne, T. Stoor and J. Niinimaki, 2005, Recognition of highly overlapping ellipse-like bubble images. Measurement Science and Technology, 16, 1760-1770.
Nazar, A.M., F.A. Silva and J.J. Ammann, 1996, Image processing for particle characterization. Materials Characterization, 36, 165-173.
Otsu, N., 1979, A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man and Cybernetics, 9, 62-66.
Pal, N.R. and S.K. Pal, 1993, A review on image segmentation techniques. Pattern Recognition, 26, 1277-1294.
Russ, J.C., 2002, The Image Processing Handbook, CRC Press.
Shen, L., X. Song, M. Iguchi and F. Yamamoto, 2000, A method for recognizing particles in overlapped particle images. Pattern Recognition Letters, 21, 21-30.
van den Berg, E.H., A.G.C.A. Meesters, J.A.M. Kenter and W. Schlager, 2002, Automated separation of touching grains in digital images of thin sections. Computers and Geosciences, 28, 179-190.
Watano, S. and K. Miyanami, 1995, Image processing for on-line monitoring of granule size distribution and shape in fluidized bed granulation. Powder Technology, 83, 55-60.