1. Introduction
Image segmentation is the problem of partitioning an image into homogeneous regions that are semantically meaningful with respect to some characteristic like intensity or texture [9]. Segmentation is not concerned with determining what the partitions represent. Mathematically, if the domain of the image is given by I, then the segmentation problem is to determine the sets S_j whose union is the entire image I. Thus the sets that make up a segmentation must satisfy

I = ⋃_{j=1}^{n} S_j,

where S_j ∩ S_k = ∅ for j ≠ k, each S_j is connected, and n is the number of objects of interest. The goal of a segmentation method is therefore to find the sets that correspond to distinct anatomical structures or regions of interest in the image. Simply looking at an image, a human observer may not be able to properly visualize the structures he wants to identify. Image segmentation plays an essential role in many medical imaging applications because it can better delineate anatomical structures and other regions of interest. For example, automated magnetic resonance imaging (MRI) segmentation systems classify brain voxels into one of three main tissue types [1]: gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF). Segmentation is also important in various biomedical imaging applications such as quantification of tissue volumes, diagnosis [2], localization of pathology, study of anatomical structure [3], treatment planning, partial volume correction of functional imaging data, and computer-integrated surgery. When the constraint that regions be connected is removed, determining the sets S_j is called pixel classification, and the sets themselves are called classes. Pixel classification is a desirable goal in medical images, particularly when disconnected regions belonging to the same tissue class need to be identified. Determining the total number of classes in pixel classification can itself be a difficult problem [4].
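The distinction drawn above between segmentation and pixel classification hinges on connectivity. As a minimal illustration (not from the paper), the following sketch recovers the connected sets S_j from a binary pixel classification by 4-connected flood fill; the toy mask is an assumed example:

```python
# Illustrative sketch: one pixel class (mask == 1) may split into several
# connected regions S_j once connectivity is enforced.
from collections import deque

def connected_components(mask):
    """Label 4-connected regions of a binary image (list of lists of 0/1)."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not labels[r][c]:
                current += 1                      # start a new region S_j
                labels[r][c] = current
                queue = deque([(r, c)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current

# One tissue class split into two disconnected regions -> two sets S_j.
mask = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 1]]
labels, n = connected_components(mask)
print(n)  # 2 connected regions arise from a single pixel class
```

A pixel classifier would treat all the 1-pixels as one class; the segmentation view separates them into two connected sets.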
Labeling is the process of assigning a meaningful designation to each region or class, and it can be performed separately from segmentation. In medical imaging, the labels are often visual and can be determined only by a physician or technician. Computer-automated labeling is desirable when labels are not obvious in automated processing systems. A typical situation involving labeling occurs in digital mammography, where the image is segmented into distinct regions and the regions are subsequently labeled as healthy or cancerous tissue. In the next section we describe segmentation methods.
2. Segmentation methods
The segmentation process is perhaps the most important step in image analysis, since its performance directly affects the performance of the subsequent processing steps in medical image analysis. Despite its importance, segmentation remains an unsolved problem in the general sense, as it lacks a general mathematical theory. The two main difficulties of the segmentation problem are its under-constrained nature and the lack of a unique correct segmentation. As a consequence of these shortcomings, a large number of techniques have been developed in the literature [7]. Selection of one segmentation method over another
depends mainly on the type of image that needs to be segmented, so there is no universal method that can be successfully applied to all types of images. A segmentation method developed for one type of image may not be applicable to other images, or may apply only with poor performance. We divide segmentation methods into the following main categories [9]:
a. Thresholding approach
b. Region growing approach
c. Classifiers
d. Clustering approach
e. Markov random field models
f. Artificial neural networks
g. Deformable models
h. Atlas-guided approach [12]
Most image segmentation methods are posed as optimization problems in which the desired segmentation minimizes some energy or cost function defined by the particular application. In probabilistic methods, this is equivalent to maximizing a likelihood or a posteriori probability. Thus, for a given image Y, we seek the segmentation

X̂ = arg min_X ξ(X, Y),

where the energy function ξ depends on the observed image Y and a segmentation X. Defining an energy function ξ is a difficult problem because there is a wide variety of image properties that can be used, such as intensity, edges, and texture. Here we describe a few methods in detail.
a. Thresholding approach: Thresholding [6] is a simple but effective method for segmenting medical images in which different structures have contrasting intensities or other quantifiable features. In this approach, a binary partitioning of the image intensities is created. A thresholding procedure attempts to determine an intensity value, called the threshold, which separates the desired classes. The segmentation is then achieved by grouping all pixels with intensity greater than the threshold into one class, and all other pixels into another class. In other words, the thresholded image g(x, y) of an image f(x, y) is defined as

g(x, y) = 1 if f(x, y) ≥ T,
g(x, y) = 0 if f(x, y) < T,

where T is the selected threshold, which is to be determined.
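The threshold rule can be sketched directly. In this minimal example, the image values and the threshold T = 100 are illustrative assumptions, not data from the paper:

```python
# Binary partitioning of image intensities at a fixed threshold T:
# g(x, y) = 1 if f(x, y) >= T, else 0.
def threshold(image, T):
    """Return the binary thresholded image g for input image f."""
    return [[1 if pixel >= T else 0 for pixel in row] for row in image]

f = [[ 30, 120, 200],
     [ 90, 101,  40],
     [250,  10,  99]]
g = threshold(f, T=100)
print(g)  # [[0, 1, 1], [0, 1, 0], [1, 0, 0]]
```

In practice T would be chosen from the image histogram rather than fixed by hand, for example by placing it in the valley between two intensity modes.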
Thresholding is often used as an initial step in a sequence of image processing operations. Although the approach is simple, its main limitations are that it typically generates only two classes and it cannot be applied directly to multi-channel images. It is also sensitive to noise and intensity inhomogeneities, which commonly occur in magnetic resonance images. b. Region growing approach: Region growing [6] is a technique that groups pixels or sub-regions into larger regions based on some predefined criteria for growth. These criteria can be based on intensity information or edges in the image. The basic approach is to start with a set of manually selected seed points; from these seed points, regions are
grown by appending to each seed those neighboring pixels that have predefined properties similar to it. The selection of a set of one or more starting points can often be based on the nature of the problem. The primary disadvantage of region growing is that it requires manual interaction to obtain the seed points: for each region that needs to be extracted, a seed must be planted. Split-and-merge algorithms are related to region growing but do not require a seed point. Region growing can also be sensitive to noise, causing extracted regions to have holes. c. Classifiers: Classifier methods are pattern recognition techniques that seek to partition a feature space derived from the image using data with known labels [7]. Classifiers are known as supervised methods because they require training data that are manually segmented and then used as references for automatically segmenting new data. There are a number of ways in which training data can be applied in classifier methods. A simple classifier is the nearest-neighbor classifier, where each pixel or voxel is assigned to the same class as the training sample with the closest intensity; the k-nearest-neighbor (kNN) classifier is a generalization of the nearest-neighbor classifier. The Parzen window is a nonparametric classifier in which the classification is carried out according to the majority vote within a predefined window of the feature space centered at the unlabeled pixel intensity. A commonly used parametric classifier is the maximum likelihood (ML) or Bayes classifier. It assumes that the pixel intensities are independent samples from a mixture of probability distributions. This mixture, called a finite mixture model, is given by the probability density function

f(y_j; θ, π) = Σ_{k=1}^{K} π_k f_k(y_j; θ_k),

where y_j is the intensity of pixel j, f_k is a component probability density function parameterized by θ_k, and θ = (θ_1, ..., θ_K). The variables π_k are mixing coefficients that weight the contribution of each density function, with π = (π_1, ..., π_K). Unlike thresholding methods, the ML classifier is non-iterative, and classifier methods can be applied to multi-channel images. A disadvantage of classifier-based segmentation is the requirement of manual interaction to obtain training data. Training sets can be acquired for each image that requires segmenting, but this can be time-consuming and laborious. On the other hand, use of the same training set for a large number of scans can lead to biased results. d. Clustering approach: Clustering-based segmentation differs from classifier-based methods in that it does not require any training data; it is an unsupervised method. Because of the lack of training data, clustering methods iterate between segmenting the image and characterizing the properties of each class; in this sense, clustering methods train themselves using the available data. Commonly used clustering algorithms are the k-means algorithm [8] and the expectation-maximization (EM) algorithm. The k-means clustering algorithm clusters data by iteratively computing a mean intensity for each class and segmenting the image by assigning each pixel to the class with the closest mean. The fuzzy c-means algorithm generalizes the k-means algorithm, allowing soft segmentations based on fuzzy set theory. The EM algorithm applies the same clustering principle under the assumption that the data follow a Gaussian mixture model; it iterates between computing the posterior probabilities and computing maximum likelihood estimates of the means, covariances, and mixing coefficients of the mixture model.
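As an illustration of the k-means iteration just described, here is a minimal one-dimensional sketch on raw pixel intensities. The toy intensities and the initial means are assumed values, not data from the paper:

```python
# k-means on 1-D pixel intensities: alternately assign each pixel to the
# class with the closest mean, then recompute each class mean.
def kmeans_1d(intensities, means, iters=20):
    """Return the converged class means for a fixed number of classes."""
    for _ in range(iters):
        clusters = [[] for _ in means]
        for y in intensities:
            j = min(range(len(means)), key=lambda j: abs(y - means[j]))
            clusters[j].append(y)          # assignment step
        means = [sum(c) / len(c) if c else m
                 for c, m in zip(clusters, means)]  # update step
    return means

pixels = [10, 12, 11, 90, 95, 92, 200, 205]      # three intensity groups
means = kmeans_1d(pixels, means=[0.0, 100.0, 255.0])
print([round(m, 1) for m in means])  # [11.0, 92.3, 202.5]
```

The same alternation underlies EM, except that EM replaces the hard assignment with posterior probabilities and also re-estimates covariances and mixing coefficients.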
Clustering methods require an initial segmentation. The EM algorithm does not directly incorporate spatial modeling and can therefore be sensitive to noise and intensity inhomogeneities. f. Artificial neural networks: Artificial neural networks [5, 10] are massively parallel networks of processing elements or nodes that simulate biological learning. Each node in an artificial neural network is capable of performing elementary computations. Learning is achieved through the adjustment of weights assigned to the connections between nodes. Artificial neural networks represent a paradigm for machine learning and can be used in a variety of ways for image segmentation. The use most widely applied in medical imaging is as a classifier, where the weights are determined from training data and the network is then used to segment new data. Artificial neural networks can also be used in an unsupervised fashion as a clustering method. Although artificial neural networks are inherently parallel, their processing is usually simulated on a standard serial computer, which reduces this potential computational advantage. g. Deformable models: Among model-based techniques, deformable models [11] offer a unique and powerful approach to image analysis that combines geometry, physics, and approximation theory. These are model-based techniques for delineating region boundaries using closed parametric curves or surfaces that deform under the influence of internal and external forces. To delineate an object boundary in an image, a closed curve or surface must first be placed near the desired boundary and then allowed to undergo an iterative relaxation process. Internal forces are computed from within the curve or surface to keep it smooth throughout the deformation. External forces are usually derived from the image to drive the curve or surface towards the desired feature of interest.
There are many deformable models, such as energy-minimizing deformable models, dynamic deformable models, and probabilistic deformable models. The mathematical form of deformable models represents the confluence of geometry, physics, and approximation theory: geometry serves to represent object shape, physics imposes constraints on how the shape may vary over space and time, and optimal approximation theory provides the formal mechanisms for fitting the models to measured data. The model moves according to its dynamic equations and seeks the minimum of a given energy function. The deformation of a typical 2-D deformable model can be characterized by the following dynamic equation:

μ(s) ∂²x(s, t)/∂t² + γ(s) ∂x(s, t)/∂t = F_int + F_ext,

where x(s, t) = (x(s, t), y(s, t)) is a parametric representation of the position of the model at a given time t, μ(s) and γ(s) are parameters representing the mass density and damping density of the model, and F_int and F_ext are the internal and external forces, respectively. The main advantages of deformable models are their ability to generate closed parametric curves or surfaces directly from images and their inclusion of a smoothness constraint. A disadvantage is that they require manual interaction to place an initial model and to choose suitable parameters.
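The dynamic equation above can be integrated numerically. The following sketch is an illustrative assumption, not the paper's implementation: it sets the mass density μ = 0 (so the equation reduces to γ ∂x/∂t = F_int + F_ext), takes explicit Euler steps, uses a discrete second difference as the internal force, and sets the external force to zero so that only the smoothing behaviour of the internal force is visible:

```python
# Explicit Euler relaxation of a closed 2-D curve under the internal
# smoothing force F_int ~ alpha * x''(s); F_ext is taken as zero here.
import math

def relax_snake(points, gamma=1.0, alpha=0.5, dt=0.1, steps=200):
    """Relax a closed polygonal curve; returns the deformed point list."""
    pts = [list(p) for p in points]
    n = len(pts)
    for _ in range(steps):
        new = []
        for i in range(n):
            p_prev, p, p_next = pts[i - 1], pts[i], pts[(i + 1) % n]
            # Discrete second derivative pulls each point toward its neighbors.
            f_int = [alpha * (p_prev[d] - 2 * p[d] + p_next[d]) for d in range(2)]
            new.append([p[d] + (dt / gamma) * f_int[d] for d in range(2)])
        pts = new  # synchronous update of all points
    return pts

def perimeter(pts):
    n = len(pts)
    return sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n))

square = [(0, 0), (2, 0), (2, 2), (0, 2)]   # a coarse closed contour
relaxed = relax_snake(square)
print(perimeter(relaxed) < perimeter(square))  # True: the curve smooths and shrinks
```

With no external force the curve contracts toward its centroid; in a real segmentation, an image-derived F_ext (for example a gradient-magnitude field) balances this shrinkage and halts the curve at the object boundary.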
References:
[1] A. Mayer and H. Greenspan, An adaptive mean-shift framework for MRI brain segmentation, IEEE Trans. Med. Imag., Vol. 28, No. 6, pp. 1238-1250, 2009.
[2] Taylor, Invited review: computer aids for decision-making in diagnostic radiology, a literature review, Brit. J. Radiol., pp. 945-957, 1995.
[3] A. J. Worth, N. Makris, V. S. Caviness, and D. N. Kennedy, Neuroanatomical segmentation in MRI, Int. J. Patt. Rec. Art. Intel., pp. 1161-1187, 1997.
[4] D. A. Langan, J. W. Modestino, and J. Zhang, Cluster validation for unsupervised stochastic model-based image segmentation, IEEE Trans. Image Process., Vol. 7, pp. 180-195, 1998.
[5] J.-S. Lin and C. Mao, The application of competitive Hopfield neural network to medical image segmentation, IEEE Trans. Med. Imag., Vol. 15, No. 4, August 1996.
[6] R. C. Gonzalez and R. E. Woods, Digital Image Processing Using MATLAB, Pearson Education, Inc. and Dorling Kindersley Publishing Inc.
[7] J. C. Bezdek, L. O. Hall, and L. P. Clarke, Review of MR image segmentation techniques using pattern recognition, Med. Phys., Vol. 20, pp. 1033-1048, 1993.
[8] G. B. Coleman and H. C. Andrews, Image segmentation by clustering, Proc. IEEE, Vol. 67, No. 5, pp. 773-785, 1979.
[9] R. M. Haralick and L. G. Shapiro, Image segmentation techniques, Comput. Vis. Graph. Im. Proc., Vol. 29, pp. 100-132, 1985.
[10] J. W. Clark, Neural network modelling, Phys. Med. Biol., Vol. 36, pp. 1259-1317, 1991.
[11] T. McInerney and D. Terzopoulos, Deformable models in medical image analysis: a survey, Medical Image Analysis, pp. 91-108, 1996.
[12] S. Z. Li, Markov Random Field Modeling in Computer Vision, Springer, 1995.