Automatic recognition of cortical sulci of the human brain using a congregation of neural networks

Medical Image Analysis 6 (2002)

Denis Rivière a,*, Jean-François Mangin a, Dimitri Papadopoulos-Orfanos a, Jean-Marc Martinez b, Vincent Frouin a, Jean Régis c

a Service Hospitalier Frédéric Joliot, CEA, 4 place du Général Leclerc, Orsay, France
b Service d'Etude des Réacteurs et de Mathématiques Appliquées, CEA, Saclay, France
c Service de Neurochirurgie Fonctionnelle et Stéréotaxique, La Timone, Marseille, France

* Corresponding author. E-mail address: riviere@shfj.cea.fr (D. Rivière).

Received 24 January 2001; received in revised form 18 June 2001; accepted 21 September 2001

Abstract

This paper describes a complete system allowing automatic recognition of the main sulci of the human cortex. This system relies on a preprocessing of magnetic resonance images leading to abstract structural representations of the cortical folding patterns. The representation nodes are cortical folds, which are given a sulcus name by a contextual pattern recognition method. This method can be interpreted as a graph matching approach, which is driven by the minimization of a global function made up of local potentials. Each potential is a measure of the likelihood of the labelling of a restricted area. This potential is given by a multi-layer perceptron trained on a learning database. A base of 26 brains manually labelled by a neuroanatomist is used to validate our approach. The whole system developed for the right hemisphere is made up of 265 neural networks. The mean recognition rate is 86% for the learning base and 76% for a generalization base, which is very satisfying considering the current weak understanding of the variability of the cortical folding patterns. © 2002 Elsevier Science B.V. All rights reserved.

Keywords: Neural networks; Cortical sulci; Folding patterns; Automatic recognition system

1. Introduction

The development of image analysis methods dedicated to automatic management of brain anatomy is a widely addressed area of research. A number of works focus on the notion of deformable atlases, which can be elastically transformed to reflect the anatomy of new subjects. An exhaustive bibliography of this approach, initially proposed by Bajcsy and Broit (1982), is largely beyond the scope of this paper (see (Thompson et al., 2000) for a recent review). The complexity and the striking inter-individual variability of the human cortex folding patterns, however, have led several groups to question the behaviour of the deformable atlas framework at the cortex level (Mangin et al., 1995b; Collins et al., 1998; Hellier and Barillot, 2002; Lohmann and von Cramon, 2000; Cachier et al., 2001). Two main issues have to be addressed:

1. What are the features of the cortex folding patterns which should be matched across individuals? While some sulci clearly belong to this set of landmark features because they are usually considered as boundaries between different functional areas, nobody knows to which extent secondary folds should play the same role (Regis et al., 1989, 1995). Some answers to this important issue could stem from foreseeable advances in mapping brain functional organization (Watson et al., 1993) and connectivity (Poupon et al., 2001). While the number of reliable landmarks to be matched is today relatively limited, comparison of deformable atlas methods at the cortex level should focus on the pairing of these landmarks.

2. Deformable atlas methods rely on the optimization of some function which realizes a trade-off between similarity to the new brain and deformation cost. Whatever the approach, the function driving the deformations is non-convex. When high-dimensional deformation fields are used, this non-convexity turns out to be particularly problematic since standard optimization approaches are bound to lead to a local optimum. While multi-resolution methods may guarantee that an interesting optimum is found, the complexity of the cortical folding patterns implies that a lot of other similar optima exist. An important issue is raised by this observation: is the global optimum the best one according to the pairing of sulcal landmarks? The answer to this issue should be taken into account when comparing various approaches.

To overcome some of the difficulties related to the non-convexity of the problem, several teams have proposed to design composite similarity functions relying on manual identifications of the main sulci (Thompson and Toga, 1996; Collins et al., 1998; Vailland and Davatzikos, 1999). These composite functions impose the pairing of homologous sulcal landmarks. While a lot of work remains to be done along this line, this evolution seems required to adapt the deformable atlas paradigm to the human cortex. This new point of view implies a preprocessing of the data in order to extract and identify automatically these sulcal landmarks, which is the subject of our paper.

Our approach may be considered as a symbolic version of the deformable atlas approach. The framework is made up of two stages. An abstract structural representation of the cortical topography is extracted first from each new T1-weighted MR image. This representation is supposed to include all the information required to identify sulci. A contextual pattern recognition method is then used to label automatically the cortical folds. This method can be interpreted as a graph matching approach. Hence, the usual iconic anatomical template is replaced by an abstract structural template. The one-to-many matching between the template nodes and the nodes of one structural representation is simply a labelling operation. This labelling is driven by the minimization of a global function made up of local potentials. Each local potential is a measure of the likelihood of the labelling of a restricted cortex area. This potential is given by a virtual expert in this area made up of a multi-layer perceptron trained on a learning database.

While the complexity of the preprocessing stage required by our method may appear as a weakness compared to the straightforward use of continuous deformations, it results in a fundamental difference. While the evaluation of functions driving continuous deformations is costly in terms of computation, the function used to drive the symbolic recognition relies on only a few hundred labels and can be evaluated at a low cost. Hence, stochastic optimization algorithms can be used to deal with the non-convexity problems. In fact, working at a higher level of representation leads to more efficiency for the pattern recognition process, which explains an increasing interest in the community (Lohmann and von Cramon, 1998, 2000; Le Goualher et al., 1998, 1999).

The various issues mentioned above have led us to initiate a long term project aiming first at a better understanding of the cortical folding patterns (Mangin et al., 1995a; Regis et al., 1995), and second at the automatic identification of the main sulci (Mangin et al., 1995b). During a feasibility study, this project led to a first generation of image analysis tools extracting automatically each cortical fold from a T1-weighted MR image. Then, a sophisticated browser allowed our neuroanatomist to navigate through various 3D representations of the cortical patterns in order to identify the main sulci. This visualization tool led to the creation of a database of brains in which a name was given to each fold. This database was used to train an automatic sulcus recognition system based on a random graph model. Any cortical folding pattern was considered as a realization of this model, which led us to formalize the recognition process as a consistent labelling problem. The solution was obtained from a maximum a posteriori estimator designed in a Markovian framework. While this first tool generation has been used for four years for the planning of depth electrode implantation in the context of epilepsy surgery (about 40 operations), a number of serious flaws had to be overcome to allow a wider use of the toolbox. This paper gives an overview of the second tool generation, with emphasis on the most important improvement, which consists in using standard neural nets to build a better model of the random graph probability distribution.

In the following, the second section summarizes the main steps of the preprocessing stage. The third section gives an overview of the building-up of a database of manually labelled brains used to teach cortical anatomy to the pattern recognition system. The fourth section introduces the probabilistic framework underlying the graph matching procedure. The fifth section focuses on the training of the artificial neural networks. The sixth section describes the stochastic minimization heuristics and some results. Finally, the last section highlights the fact that improving the current system will require collaborative work with various neuroscience teams.

2. The preprocessing stage

This section describes briefly the robust sequence of treatments that automatically converts a T1-weighted MR image into an abstract structural representation of the cortical topography. The whole sequence requires about half an hour on a conventional workstation. All the steps have been validated with at least 50 different images, some of them with several hundred. These images have been acquired with 6 different scanners using various MR sequence parameters.

Several experiments have led us to select inversion recovery sequences as the best choice for our purpose. Most of the treatments rely on several years of fine tuning, which assures today a robust behaviour with non-pathological images. Further work has to be done to deal with the pathologies that invalidate some of our assumptions. The system should rapidly be endowed with an interface allowing a step by step check of intermediate results and proposing alternative treatments in case of problems. The following descriptions focus on the main ideas behind each treatment. Most of the refinements added to get robust behaviour are beyond the scope of the paper.

Fig. 1. A sketch of the sequence of image analysis treatments (the G and J 3D renderings represent views from inside white matter).

2.1. Bias correction (Fig. 1(B))

The first step aims at correcting the standard inhomogeneities of MR images. This is achieved using a smooth multiplicative field which minimizes the entropy of the corrected image intensity distribution. This method can be used without adaptation with various MR sequences, because the underlying hypothesis is only the low entropy of the actual distributions of each tissue class (Mangin, 2000; Likar et al., 2000).
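To make the entropy criterion concrete, here is a minimal sketch of the idea under strong simplifying assumptions: the multiplicative field is a first-order polynomial of the coordinates and the entropy is taken from a plain 128-bin histogram. All function names and parameter values are illustrative and do not come from the actual implementation (Mangin, 2000).

```python
# Illustrative sketch (not the authors' implementation): correct a multiplicative
# bias by minimizing the entropy of the corrected intensity histogram.
import numpy as np
from scipy.optimize import minimize

def histogram_entropy(values, bins=128):
    """Shannon entropy of the intensity distribution."""
    hist, _ = np.histogram(values, bins=bins, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    return -np.sum(p * np.log(p))

def smooth_field(coeffs, shape):
    """Very smooth multiplicative field: exponential of a first-order polynomial."""
    zz, yy, xx = np.meshgrid(*[np.linspace(-1, 1, s) for s in shape], indexing="ij")
    c0, cx, cy, cz = coeffs
    return np.exp(c0 + cx * xx + cy * yy + cz * zz)

def correct_bias(image):
    """Return the image divided by the entropy-minimizing smooth field."""
    def cost(coeffs):
        return histogram_entropy(image / smooth_field(coeffs, image.shape))
    res = minimize(cost, x0=np.zeros(4), method="Powell")
    return image / smooth_field(res.x, image.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.choice([50.0, 100.0, 150.0], size=(32, 32, 32))   # three fake tissue classes
    corrected = correct_bias(img * smooth_field([0.0, 0.3, -0.2, 0.1], img.shape))
```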

2.2. Histogram analysis (Fig. 1(C))

The second step leads to estimations of the gray and white matter means and standard deviations. It relies on a scale-space analysis of the histogram which is robust to modifications of the MR sequence (Mangin et al., 1998).

2.3. Brain segmentation (Fig. 1(D))

The parameters given by the previous step are used to segment the brain. This result is obtained following the standard mathematical morphology sketch (erosion, selection of the largest connected component, reconstruction). Two important refinements have been added for robustness: a regularized binarization using a standard Markov field based model, and additional morphological treatments to prevent morphological opening of thin gyri (Mangin et al., 1998).
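The following sketch illustrates the erosion / largest-connected-component / reconstruction scheme with scipy.ndimage; the thresholds, structuring element and connectivity are placeholders, and the Markovian regularization and thin-gyri safeguards mentioned above are omitted.

```python
# Minimal morphological brain extraction sketch (placeholder parameters, no
# Markovian regularization): binarize, erode, keep largest component, reconstruct.
import numpy as np
from scipy import ndimage

def segment_brain(image, low, high, erosion_mm=3, voxel_mm=1.0):
    binary = (image > low) & (image < high)              # rough tissue binarization
    radius = max(1, int(round(erosion_mm / voxel_mm)))
    structure = ndimage.generate_binary_structure(3, 1)  # 6-connectivity element
    eroded = ndimage.binary_erosion(binary, structure, iterations=radius)
    labels, n = ndimage.label(eroded)
    if n == 0:
        return binary
    sizes = ndimage.sum(eroded, labels, index=range(1, n + 1))
    seed = labels == (np.argmax(sizes) + 1)              # largest connected component
    # geodesic reconstruction: dilate the seed inside the binarized mask
    return ndimage.binary_propagation(seed, mask=binary)
```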
2.4. Hemisphere separation (Fig. 1(E))

A second sequence of morphological processing is used to separate both hemispheres from the rest of the brain. This algorithm, which is similar to the previous one, is applied to a regularized segmentation of white matter. A priori knowledge on the brain orientation is used to select the seeds, which are reconstructed to get three objects: the white matter of each hemisphere and the cerebellum/stem white matter. A second reconstruction recovers the gray matter of each object (Mangin et al., 1996). A standard affine spatial normalization could be used in the future to get a rough mask of the hemispheres that may be used to increase the robustness of the seed selection (Friston et al., 1995). All the following steps are applied independently to each hemisphere.

2.5. The gray/CSF union (Fig. 1(F))

This step aims at segmenting an object with a spherical topology. Its external interface is the hemisphere hull defined by a morphological closing, and its internal interface is the gray/white boundary. This segmentation is achieved using a sequence of homotopic deformations of the hemisphere bounding box (Mangin et al., 1995a). The topological constraints assure the robustness of the following treatments. The detection of the gray/white boundary relies on the minimization of a Markov field like global energy including the usual regularization provided by the Ising model.

2.6. Skeletonization (Fig. 1(G))

The gray/CSF object provided by the previous step is skeletonized. This skeletonization is done using a homotopic erosion that preserves the initial topology. An important refinement relative to our previous work (Mangin et al., 1995a) is the use of a watershed like algorithm embedded in the erosion process. The landscape driving the water rise is the mean curvature of the MR image isosurfaces, which is used to mark ridges corresponding to the medial localization of cortical folds. Topologically simple points (Malandain et al., 1993) are iteratively removed from the initial object according to a sequence of increasing altitudes. As soon as a point verifies the topological characterization of surface points (Malandain et al., 1993), it is preserved until the end of the process. Some pruning procedures remove curves from the final result in order to yield a skeleton made up of discrete surfaces.

2.7. Simple surfaces (Fig. 1(H,I))

Skeleton points connected to the outside are first gathered to represent the hemisphere hull. The remaining part of the skeleton is then segmented into topologically simple surfaces, which will represent cortical folds. This algorithm relies on the topological characterization proposed by Malandain et al. (1993). Simple surfaces are defined from an equivalence relationship defined for a set of surface points. A refinement relative to previous work (Malandain et al., 1993; Mangin et al., 1995a) consists of an erosion of the initial set of skeleton surface points at the level of junction points. This erosion aims at improving the robustness of the split. The standard equivalence relationship then provides simple surface seeds. A morphological reconstruction yields the complete simple surfaces.

2.8. Buried gyri (Fig. 1(J))

The previous segmentation of the skeleton is not sufficient to separate all of the cortical sulci. Indeed, some of the simple surfaces sometimes include several sulci, which is not tractable for our symbolic recognition process. According to our anatomical research hypothesis (Regis et al., 1995; Manceaux-Demiau et al., 1997), this situation is related to the fact that some gyri can be buried in the depth of the folds. Since our recognition process is based on a labelling using the sulcus names, we have to assure as far as possible that the preprocessing yields an oversegmentation of the sulci. Therefore, the previous simple surfaces are split according to a detection of putative buried gyri. In our opinion, these gyri can be revealed by two kinds of clues: local minima of the geodesic depth along the bottom of the fold, and points with negative Gaussian curvature on the gray/white boundary. This point of view, which is related to the approach of Lohmann and von Cramon (1998), led us to design the following algorithm, which is inspired by the usual morphological construction of the catchment basins dual to a watershed line. First, points of the gray/white interface having a negative Gaussian curvature are removed from the gray/CSF object. Then, consistent local maxima of the distance to the hull geodesic to the remaining gray/CSF domain are detected. They represent the seeds of the catchment basins. The basins are then reconstructed following the usual water rise approach, using the inverse of the previous distance for the altitude. Finally, simple surfaces which belong to several catchment basins are split according to the basin parcellation.
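A much simplified sketch of this catchment-basin split is given below, assuming a precomputed geodesic depth map and using the watershed implementation of scikit-image; the curvature-based point removal and the seed consistency checks of the real pipeline are not reproduced.

```python
# Simplified sketch of the catchment-basin split of a fold (assumed helper data,
# not the authors' code): seeds are local maxima of geodesic depth, the water
# rises on the inverted depth, and a fold spanning several basins is split.
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def split_fold_by_depth(fold_mask, depth):
    """fold_mask: boolean array of one simple surface; depth: geodesic depth map."""
    d = np.where(fold_mask, depth, 0.0)
    # seeds: local maxima of the depth inside the fold
    maxima = (d == ndimage.maximum_filter(d, size=3)) & fold_mask & (d > 0)
    markers, n_seeds = ndimage.label(maxima)
    if n_seeds < 2:
        return fold_mask.astype(int)            # nothing to split
    # water rise on the inverted depth, restricted to the fold
    return watershed(-d, markers=markers, mask=fold_mask)
```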

2.9. Graph construction (Fig. 2)

The objects provided by the last step are finally gathered in a structural representation which describes their relationships. Three kinds of links are created between these nodes (cf. Fig. 2): rt links represent splits related to the simple surface definition; rp links represent splits related to the presence of a putative buried gyrus (the pli de passage anatomical notion (Regis et al., 1995)); and rc links represent a neighborhood relationship geodesic to the hemisphere hull. This last type of link is inferred from a Voronoï diagram computed conditionally to the hemisphere hull, using the set of junctions between hull and nodes as seeds (Mangin et al., 1995a). The resulting graph is enriched with numerous semantic attributes which will be used by the recognition system. Some of these attributes are computed relative to the well-known Talairach reference system, which is computed from the manual selection of the anterior and posterior commissures but will be inferred automatically from spatial normalization in the future (Talairach and Tournoux, 1988). Nodes are described by their size, minimal and maximal depth, gravity center localization, and mean normal. Links of type rt and rp are described by their length, extremity localizations, minimal and maximal depth, and mean direction. Links of type rc are described by their size and the localization of the closest points of the linked nodes. The resulting attributed graph is supposed to include all the information required by the sulcus recognition process.

Fig. 2. A subset of the final structural representation.
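As an illustration, a minimal data structure for such an attributed relational graph could look as follows; the field names are ours and only mirror the attributes listed above.

```python
# Sketch of the attributed relational graph (illustrative field names): folds are
# nodes with geometric attributes, and links are typed "rt" (simple-surface split),
# "rp" (putative buried gyrus / pli de passage) or "rc" (hull-geodesic neighborhood).
from dataclasses import dataclass, field

@dataclass
class FoldNode:
    index: int
    size: float                      # number of skeleton voxels
    depth_min: float
    depth_max: float
    gravity_center: tuple            # Talairach coordinates
    mean_normal: tuple
    label: str = "unknown"           # sulcus name assigned by the recognition

@dataclass
class FoldLink:
    kind: str                        # "rt", "rp" or "rc"
    nodes: tuple                     # pair of node indices
    attributes: dict = field(default_factory=dict)   # length, depth, direction, ...

@dataclass
class CorticalGraph:
    nodes: list
    links: list

    def neighbors(self, index):
        return [link for link in self.links if index in link.nodes]
```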
3. The learning database

Our preprocessing tool can be viewed as a compression system which provides for each individual brain a synthetic description of the cortex folding patterns. A sophisticated 3D browser allows our neuroanatomist to label manually each node with a name chosen in a list of anatomical entities. The lack of a validated explanation of the structural variability of the human cortex is an important problem during this labelling. Indeed, standard sulci are often split into several folds with various connections, which leads to ambiguous configurations (Ono et al., 1990).

It has to be understood that this situation prevents the definition of an unquestionable gold standard to be reached by any sulcus recognition method. Therefore, one of the aims of our research is to favour the emergence of new anatomical descriptions relying on smaller sulcal entities than the usual ones. According to different arguments that would be too long to develop in this paper, these units, the primary cortical folds that appear on the fœtal cortex, are stable across individuals, and a functional delimitation meaning is probably attached to them (Regis et al., 1995). During later stages of brain growth, some of these sulcal roots merge with each other and form different patterns depending on the subjects. The most usual patterns correspond to the usual sulci. In our opinion, some clues on these sulcal root fusions can be found in the depth of the sulci (Fig. 5).

A model of these sulcal roots derived from our anatomical research has been used to label 26 right hemispheres. This model shares striking similarities with the model recently proposed by Lohmann and von Cramon (1998, 2000).

This new type of anatomical model, however, requires further validations before being properly used by neuroscientists. Therefore, the results described in the following have been obtained after a conversion of this fine grain labelling to the standard nomenclature (Ono et al., 1990), which will allow comparisons with other groups' works. This choice leads to a list of 60 names for each hemisphere, where each name represents one standard sulcus or one usual sulcus branch.

The 26 right hemispheres have been randomly separated into three bases: a learning base made up of 16 brains is used to train the local experts, leading to the inference of a global probability distribution; a test base of five brains is used to stop the training before over-learning; and finally, a generalization base of five brains is used to assess the actual recognition performance of the system. We encourage the reader to study Figs. 3 and 4, which give an idea of the variability of the folding patterns. Of course, our manual labelling can not be considered as a gold standard and could be questioned by other anatomists. It has to be noted, however, that a lot of the information used to perform the manual recognition is concealed in the depth of the sulci.

Fig. 3. A survey of the labelled database. The first three rows present nine brains of the learning base, the fourth row presents three brains of the test base, and the last row presents three brains of the generalization base. Each color labels one entity of the anatomical model. Several hues of the same color are used to depict different roots or stable branches of one given sulcus. For instance, the color codes of the main frontal sulci are: 2 reds = central, 5 yellows = precentral, 3 greens = superior, 2 blues = intermediate, 4 purples = inferior, 8 blues = lateral fissure, red = orbitary, rose = marginal, yellow = transverse.
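The random separation into the three bases amounts to a few lines of the following kind (subject identifiers and the random seed are of course fictitious):

```python
# Minimal sketch of the random 16/5/5 split of the 26 labelled hemispheres
# into learning, test and generalization bases.
import random

subjects = [f"brain_{i:02d}" for i in range(26)]     # placeholder identifiers
rng = random.Random(0)
rng.shuffle(subjects)
learning_base, test_base, generalization_base = (
    subjects[:16], subjects[16:21], subjects[21:26])
```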

Fig. 4. A survey of the labelled database which provides an idea of inter-individual variability in areas not covered by Fig. 3.

Fig. 5. The sulcal root model in the temporal lobe. Left: a virtual representation where only sulcal roots are drawn on an adult size brain. It should be noted that this configuration can not be observed during brain growth, because some sulcal root merges occur before the appearance of the whole set of roots. Right: a usual actual anatomical configuration at adult age, where potentially buried gyri are indicated by a double arrow.

4. The random graph and Markovian models

The structural model underlying our pattern recognition system is a random graph, namely a structural prototype whose vertices and relations are random variables (Fig. 6). In order to allow vertices and relations of the random graph to yield sets of several nodes or several links in individual brains, the classical definition proposed by Wong and You (1985) is extended by substituting the monomorphism by a homomorphism (Mangin et al., 1995b). The recognition process can be formalized as a labelling problem, where a label is associated with each vertex of the random graph. Such a labelling of the nodes of an individual graph, indeed, is equivalent to a homomorphism towards the random graph. Hence, the sulcus recognition problem amounts to searching for the labelling with the maximum probability. For the application to the right hemisphere described in this paper, the random graph is made up of 60 vertices corresponding to the 60 names used to label the database.

Fig. 6. A small random graph (left) and one of its realizations, an attributed relational graph representing one individual cortical folding pattern (right). The a_i represent vertices of the random graph, while the b_ij represent relations. The a_i realizations are sets of nodes (SS_i^k) representing folds, while the b_ij realizations are sets of links (r_ij^k) representing junctions, plis de passage and gyri.

Once a new brain has been virtually oriented according to a universal frame, in our case the Talairach system, the cortical area where one specific sulcus can be found is relatively small. This localization information can already lead to interesting recognition results (Le Goualher et al., 1998, 1999). Localization, however, is largely insufficient to perform a complete recognition. Indeed, a lot of discriminating power only stems from contextual information. This situation has led us to introduce a Markovian framework (Mangin et al., 1995b) to design an estimator of the probability distribution associated with the random graph. This framework provides us with a very flexible model: Gibbs distributions relying on local potentials (Geman and Geman, 1984). These potentials are inferred from the learning base. They embed interactions between the labels of neighboring nodes. These interactions are related to contextual constraints that must be adhered to in order to get anatomically plausible recognitions.

During our past experiments (Mangin et al., 1995b), the system potentials were designed as simple ad hoc functions. Various failures of the global system rapidly led us to the firm belief that the complex dependencies between the pattern descriptors used to code sulcus shapes require a more powerful approach. Neural nets represent an efficient approach to the approximation of complex functions. Hence, each potential of the current system is now given by a multi-layer perceptron (MLP) (Rumelhart et al., 1986). Each perceptron may be considered as a virtual expert of some local anatomical feature. The choice of MLPs mainly stems from the fact that they have led to a lot of successful applications, which implies that a large amount of information on their behaviour can be found in the literature (Orr and Muller, 1998).

Two families of potentials are designed. The first family evaluates the sulcus shapes and the second family evaluates the spatial relationships of pairs of neighboring sulci. Hence, the first family is associated with the random graph vertices, while the second family is associated with the random graph relations. Each potential depends only on the labels of a localized set of nodes, which corresponds to the Markov field interaction clique (Geman and Geman, 1984). For a given individual graph, each clique corresponds to the set of nodes included in the field of view of the underlying expert (Fig. 7). For sulcus experts, this field of view is defined from the learning base as a parallelepiped of the Talairach coordinate system. The parallelepiped is the bounding box of the sulcus instances in the learning base, computed along the inertia axes of this instance set.

For sulcus pair relationship experts, the field of view is simply the union of the fields of view of the two related sulcus experts. Pairs of sulci are endowed with an expert if at least 10% of the learning base brains possess an actual link between the two related sulci in the structural representation (cf. Fig. 2). For the model of the right hemisphere described in this paper, this rule leads to 205 relationship experts.
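The sulcus field of view can be sketched as follows: the Talairach coordinates of all instances of a sulcus in the learning base are gathered, their inertia axes are obtained from a principal component analysis, and the bounding box is taken in this local frame. The margin parameter is an assumption added for illustration.

```python
# Sketch of a sulcus expert's field of view: the bounding box, along the inertia
# axes, of all Talairach coordinates taken by the sulcus over the learning base.
import numpy as np

def field_of_view(points, margin=5.0):
    """points: (N, 3) Talairach coordinates of every voxel carrying the sulcus label."""
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    _, _, axes = np.linalg.svd(pts - center, full_matrices=False)   # inertia axes
    local = (pts - center) @ axes.T
    return center, axes, local.min(axis=0) - margin, local.max(axis=0) + margin

def in_field_of_view(point, fov):
    center, axes, lo, hi = fov
    local = (np.asarray(point, dtype=float) - center) @ axes.T
    return bool(np.all((local >= lo) & (local <= hi)))
```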

Fig. 7. 60 sulcus experts and 205 relationship experts are inferred from the learning base. Each expert evaluates the labelling of the nodes included in its field of view.

The whole system, therefore, is made up of a congregation of 265 experts, each expert e being in charge of a potential P_e. The expert single opinions are gathered by the Gibbs distribution (1/Z) exp(−Σ_e P_e(l)), which gives the likelihood of a global labelling l (Z is a normalization constant). Hence, the sulcus recognition amounts to minimizing the sum of all of the perceptron outputs.

5. Expert training

5.1. MLP topology and pattern coding

The choice of MLP topology (number of layers, number of neurons in each layer, connectivity) is known to be a difficult problem without general solution. For our application, where a lot of different MLPs have to be designed, an adaptive strategy may have been the best choice. In the following, however, only two different topologies will be used: one for sulcus experts and one for relationship experts. The small size of our learning database, indeed, prevents a consistent adaptive strategy from being developed. Different experiments with a few experts have led us to endow our perceptrons with two hidden layers and one output neuron.

The first hidden layer is not fully connected to the input layer, which turned out to improve the generalization power of the networks used by our application. This first hidden layer is split into several blocks fed by a specific subset of inputs with a related meaning (see Fig. 7). This sparse topology largely reduces the number of weights to be estimated by the backpropagation algorithm used to train the MLPs (Rumelhart et al., 1986). Some experiments beyond the scope of this paper have shown that this choice usually leads to a restricted area of low potential (good patterns), which was not necessarily the case with a fully connected network. Finally, the first and second layers are fully connected, and the neurons of the second layer are all connected to the output neuron. The numbers of neurons in each layer are the following: (…) for sulcus experts and (…) for relationship experts. Once again, this ad hoc choice stems from experiments with a few experts. While smaller networks can lead to good results for some experts in charge of simple pattern recognition tasks, other experts seem to require large networks to perform their task correctly. Anyway, since our training process includes a protection against overlearning, our system is robust to over-proportioned networks.

Expert inputs are vectors of descriptors of the anatomical feature for which the expert is responsible. These descriptors constitute a compressed code of sulcus shapes and relationships. The descriptors are organized in consistent blocks which feed only one subset of the first hidden layer. Sulcus shapes are summarized by 27 descriptors and sulcus relationships by 23 descriptors. These descriptors are computed from a small part of the graph corresponding to one single label (sulcus) or one pair of labels (relationship). A few Boolean logical descriptors are used to inform of the existence of a non-empty instance of some anatomical entity (sulcus, junction with the hemisphere hull, actual link between two sulci, ...). Integer syntactic descriptors and continuous semantic descriptors are inferred from the attributes and the structure of the subgraph to be analyzed. For instance, the size of a sulcus is the sum of the sizes of all the nodes endowed with this sulcus label. A detailed description of all the procedures used to compute these descriptors is largely beyond the scope of this paper (Rivière, 2000).
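A minimal forward-pass sketch of this block-sparse topology is given below; the block sizes, weight initialization and the use of sigmoid units are illustrative choices (the exact layer sizes are not reproduced here), and the real experts are of course trained by backpropagation with early stopping.

```python
# Block-sparse MLP sketch (illustrative sizes): each block of descriptors feeds its
# own slice of the first hidden layer; the upper layers are fully connected.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BlockSparseMLP:
    def __init__(self, blocks, hidden2, rng=np.random.default_rng(0)):
        """blocks: list of (n_inputs, n_hidden1) pairs, one per descriptor block."""
        self.blocks = [(rng.normal(0, 0.1, (n_in, n_h)), np.zeros(n_h))
                       for n_in, n_h in blocks]
        n_h1 = sum(n_h for _, n_h in blocks)
        self.w2, self.b2 = rng.normal(0, 0.1, (n_h1, hidden2)), np.zeros(hidden2)
        self.w3, self.b3 = rng.normal(0, 0.1, (hidden2, 1)), np.zeros(1)

    def forward(self, block_inputs):
        """block_inputs: list of 1D descriptor arrays, one per block."""
        h1 = np.concatenate([sigmoid(x @ w + b)
                             for x, (w, b) in zip(block_inputs, self.blocks)])
        h2 = sigmoid(h1 @ self.w2 + self.b2)
        return float(sigmoid(h2 @ self.w3 + self.b3))   # potential in [0, 1]

# e.g. a sulcus expert whose 27 inputs are grouped like the blocks listed below
expert = BlockSparseMLP(blocks=[(1, 4), (10, 6), (7, 4), (3, 4), (6, 4)], hidden2=10)
```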

The different blocks of descriptors are the following (the (N – N′) notation means that N input neurons corresponding to N descriptors feed N′ first hidden layer neurons).

Sulcus experts:

Empty instance (1 – 36). One Boolean, which feeds all the first layer neurons, informs on the existence of an instance of the sulcus.

Localization (10 – 16). Gravity center, extremities of the junction with the brain hull; one Boolean informs on the existence of a hull junction.

Orientation (7 – 8). Mean normal, mean direction of the junction with the brain hull; one Boolean informs on the existence of a hull junction.

Size (3 – 10). Sulcus size, minimal and maximal geodesic depth.

Syntax (6 – 10). Number of connected components using all links or only contact links; number of non-contact links between contact-related connected components; maximal gap between these components (continuous); number of internal links of buried gyrus type.

Relationship experts:

Empty instance (1 – 32). One Boolean, which feeds all the first layer neurons, informs on the existence of a link between both sulci.

First sulcus (3 – 6). Sulcus size, number of connected components, number of such components implied in actual links between the sulci.

Second sulcus (3 – 6). Same as above for the second sulcus.

Semantic description (11 – 14). Minimal distance between the sulci; semantic attributes of the contact link (junction or buried gyrus), namely junction localization, mean direction, distances between the contact point and the closest sulcus extremities, respective localization of the sulci, and angle between sulcus hull junctions.

Syntactic description (3 – 6). Number of contact points, number of links of buried gyrus type between the sulci, minimal depth of such links (continuous).

5.2. Training

The supervised training of the experts relies on two kinds of examples. Correct examples extracted from the learning base must lead to the lowest output, namely the null value. Counterexamples are generated from correct examples through random modifications of some labels of the clique nodes. For examples of a sulcus l, two random numbers are used: na nodes are added to the sulcus correct pattern while nd nodes are deleted. For examples of a relationship (l1, l2), the two sulci are corrupted simultaneously. In order to obtain a good sampling of the space surrounding the correct pattern domain, the previous numbers are drawn from a distribution which favours small numbers. For the same reason, in half of the cases, the nodes to be added to the sulcus are chosen randomly only among the nodes linked with a node of the sulcus correct pattern. For the rest of the cases, they are chosen randomly among all the nodes of the clique.

Unfortunately, the blind generation of counterexamples sometimes yields ambiguous patterns. For instance, if a small branch is added to a correct sulcus pattern, the resulting example may still be considered as valid from the anatomical point of view. If many such ambiguous examples are presented to the expert as incorrect, the result of the training is unpredictable (as it would be for a human expert).

This difficulty is overcome via the use of a rough continuous distance between the correct example and the generated counterexample. For sulcus experts, this distance is made up of the variation of the total sulcus size added to the variation of the number of connected components multiplied by an ad hoc weighting factor. For relationship experts, a similar distance is defined by the variation of the total size of the links implied in the relationship. These distances are used to choose the output taught to the perceptron during the training. The ad hoc rule used to compute this output is: output = 1/(1 + exp(−d/100)). Hence, small distances lead to intermediate outputs (0.5) while larger distances lead to the highest output (1). This means that the output taught for ambiguous examples is lower than for the reliable counterexamples, which clarifies the situation. Indeed, if the domain of correct examples is corrupted by some ambiguous counterexamples, the network will lead to an average output below 0.5, while the surrounding domain full of reliable counterexamples will lead to an average output largely over 0.5. Moreover, the choice of a continuous taught output creates some slope in the landscape of the potential provided by the expert, which helps the final minimization used for sulcus recognition to find its way towards a deep minimum.
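The taught-output rule can be summarized by the following sketch; the weighting factor applied to the variation of the number of connected components is an arbitrary placeholder, the paper only stating that it is ad hoc.

```python
# Sketch of the ad hoc taught-output rule for generated counterexamples: the output
# grows with a rough distance d between the counterexample and the correct pattern.
import math

def counterexample_distance(d_size, d_components, component_weight=100.0):
    """Illustrative weighting factor; only its existence is taken from the paper."""
    return abs(d_size) + component_weight * abs(d_components)

def taught_output(distance):
    """0.5 for ambiguous (near-correct) examples, close to 1 for remote ones."""
    return 1.0 / (1.0 + math.exp(-distance / 100.0))

assert abs(taught_output(0.0) - 0.5) < 1e-9        # ambiguous counterexample
assert taught_output(1000.0) > 0.99                # reliable remote counterexample
```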
The balancing of the number of counterexamples versus the number of correct examples presented during the training is another important point. The training is made up of iterations over the learning base. Therefore, while new counter-examples are generated during each iteration, the correct examples are always the same, which may be problematic with a small base. It should be noted, however, that the situation is not so critical because counter-examples include some anatomical knowledge. Therefore, since few counter-examples can be located in the middle of the correct pattern domain, a good generalization can be obtained from only a few correct examples. We have verified with a few experts that the crucial parameter is in fact the ratio between correct examples and close counter-examples. Here, close refers to a threshold on the taught output (0.75). When the correct/close ratio is too low, the error function driving the backpropagation algorithm leads to forbid any area of low potential. When this ratio is too high, the low potential area is too large and includes a lot of incorrect patterns. The final ratio was tuned via experiments with a few experts: two close counter-examples and seven remote counter-examples for one correct example. A high number of remote counter-examples was chosen to get bounded low potential areas.

Fig. 8. A survey of the training of the central sulcus (top) and intermediate precentral sulcus (bottom) experts. The x-axis represents the number of iterations over the learning base, while the y-axis represents the perceptron output between 0 and 1. Dark (blue) points represent correct examples, light (green) points close counter-examples, and middle grey (red) points remote counter-examples. The outputs taught to the perceptrons are 0 for correct examples, about 0.75 for close counter-examples, and 1 for remote counter-examples. The first chart shows the evolution of the perceptron output for the learning base during the training. The second chart is related to the output for the test base. The third chart presents the evolution of the mean error on the test base. A consistent increase of this criterion corresponds to the beginning of overlearning.

A last point to be solved is related to counter-examples without an instance of the underlying sulci (no node with the sulcus label). If the sulcus always exists in the learning base, the taught output is 0.75. This output is lower than the highest output because a missing identification is more acceptable than a wrong answer. When the sulcus does not exist in all the brains of the learning base, the taught output is related to its frequency of appearance f: output = 0.5 + 0.25/(1 + exp(−40(f − 0.9))). This ad hoc rule allows us first to deal with situations where the sulcus is missing erroneously in a few brains (f > 0.9). In that case the taught output is close to the previous situation (0.75). Second, for a sulcus existing only in a subset of the learning base, the taught output tends to be 0.5, which means that the empty instance can only be challenged by good instances.

Finally, the backpropagation algorithm requires a criterion to stop the training when a sufficient learning has been done and to avoid over-learning. This criterion is computed from a second base, the test base. The stop criterion is made up of the sum of two mean errors computed, respectively, for correct examples and for remote counter-examples of the test base. The learning is stopped when this criterion presents a consistent increase (Fig. 8(bottom)) or after a maximum number of iterations (Fig. 8(top)).

The minimum value of the stop criterion is used to get a measure of confidence in the expert opinion. This measure is used to weight the output of this expert during the recognition process. It should be noted that some experts are endowed with a very low confidence, for instance when the sulcus shape is so variable that its identification stems only from the identification of the surrounding sulci. Another explanation for the various levels of confidence is the small size of the learning base, which is not sufficient to learn all the variations of the sulcus patterns. Base size effects on learning are explored in Figs. 9 and 10 for the central sulcus expert.

Fig. 9. Evolution of the central sulcus expert output on the test base during training on three different bases obtained by permutations. The color code is the same as in Fig. 8. The learning base includes 16 brains and the test base includes five brains. Left: perfect generalization.
Middle and right: two brains are problematic. This dependence on the choice of the learning base means that the learning base size is too small.
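A sketch of this stop criterion is given below; the patience parameter used to decide that the increase is "consistent" is our own assumption.

```python
# Sketch of the stop criterion: sum of the mean errors on correct examples and on
# remote counter-examples of the test base, monitored for a consistent increase.
def stop_criterion(test_correct_errors, test_remote_errors):
    mean = lambda xs: sum(xs) / len(xs)
    return mean(test_correct_errors) + mean(test_remote_errors)

def should_stop(criterion_history, patience=5):
    """Stop once the criterion has increased for `patience` consecutive iterations."""
    if len(criterion_history) <= patience:
        return False
    recent = criterion_history[-(patience + 1):]
    return all(b > a for a, b in zip(recent, recent[1:]))
```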

Fig. 10. Evolution of the central sulcus expert output on the test base during training on six different configurations of learning and test bases. The chart titles give the respective number of brains in each base. Top: the three charts show that the learning base size has to be sufficient to get good generalization. Bottom: the three last charts show that increasing the test base size provides a quicker observation of overlearning. This effect, however, is very difficult to predict with small learning bases.

It should be noted that since this sulcus shape is especially stable, these base size effects are bound to be more important for most of the other experts. Fortunately, since the final recognition of a sulcus results from the opinion of several experts, the global system is already rather efficient in spite of the weaknesses of individual experts.

6. Results

The training process of the 265 experts has been performed on a network of ten standard workstations and lasts about 24 h. Of course, while this high training cost was cumbersome during the tuning of the system, it is acceptable in a standard exploitation situation. Indeed, this training is done only one time, or more exactly each time we decide to enlarge the learning database.

6.1. Minimization

The sulcus recognition process itself consists of the minimization of the energy made up of the weighted sum of the expert outputs. For practical reasons, expert outputs are first scaled between −1 and 1 and then multiplied by a confidence measure. During the minimization, each node label is chosen in a subset of the sulcus list corresponding to the expert fields of view which include this node, plus the unknown label, which has no related expert. The minimization is performed using a stochastic algorithm inspired by the simulated annealing principle (Geman and Geman, 1984). This algorithm is made up of two kinds of iterations.

While most iterations correspond to the standard approach (Geman and Geman, 1984), one in ten follows a different algorithm dedicated to our application. These special iterations aim at overcoming bad situations where the minimization is lost very far from the correct labelling area. Such situations, which occur during the high temperature period, are problematic because a number of node transitions are required to reach a domain where the global energy embeds meaningful anatomical information. A fast annealing schedule, however, has not enough time to find such paths only by chance. Therefore, the standard algorithm gets trapped in a non-interesting local minimum. This problem is solved when one considers more sophisticated transitions involving several nodes simultaneously, which is very usual in the field of stochastic minimization (Tupin et al., 1998). The two kinds of iterations are as follows:

Standard iterations browse the nodes in a random order. For each node, the energy variations ΔU(l) corresponding to transitions towards each possible label l are computed. Then, the actual transition is drawn from a distribution where each label l is endowed with the probability exp(−ΔU(l)/T) / Σ_l′ exp(−ΔU(l′)/T), where T is a temperature parameter. This temperature parameter is multiplied by 0.98 at the end of each global iteration, which is the usual scheduling of simulated annealing.

Special iterations are made up of two successive loops over the labels in a random order. For each label l, the erasing loop computes the energy variations induced either by replacing l by the unknown label globally, or for only one l-related connected component. Anatomically speaking, this operation aims at challenging globally the current identification of the underlying sulcus. Such transitions may imply a lot of nodes simultaneously and may therefore be very difficult to find during the standard iteration process. The actual transition is drawn from a distribution similar to the standard iteration one. The identification loop envisages, for each label l, all the transitions that replace the unknown label by l for one unknown-related connected component. This loop takes advantage of the fact that suspicious identifications have been erased by the previous loop, which means that a whole sulcus may be identified at a time in the unknown space even if it is made up of a lot of nodes.

Our implementation of the simulated annealing principle is beyond the framework of standard convergence proofs (Geman and Geman, 1984). The transitions considered during the special iterations, indeed, are not reversible because they depend on the current graph labelling. Hence, the usual Markov chain approach to the proof is not directly applicable. A solution could stem from theoretical works dedicated to sophisticated samplers used to study Gibbs field phase transitions (Swendsen and Wang, 1987). Indeed, these samplers are applied to study the fractal nature of the Ising model realizations at critical temperature, which implies the use of connected component related transitions. Anyway, theoretical proofs are usually related to very slow annealing schedules. Therefore, our implementation, which performs only about 400 global iterations, has to be considered as a heuristic (Fig. 11). For the following results, the minimization lasts about 2 h on a conventional workstation. While an optimized implementation is planned in order to achieve a significant speed-up, it should be noted that the manual labelling work is even slower.

Fig. 11. Global energy behaviour during simulated annealing. The special iterations lead to large energy decreases during the high temperature period, while their influence becomes imperceptible later.

Because of the heuristic nature of our minimization, the improvements resulting from the special iterations can only be assessed on a statistical basis, using different brains. This algorithm, indeed, is bound to be trapped in a local minimum because of the highly non-convex nature of the underlying energy. Implementations with and without special iterations have been compared during a one shot experiment on the 26 brains (Fig. 12). The implementation including special iterations led to a lower energy for 18 brains. Further studies should be done to assess the influence of the frequency of occurrence of special iterations. This first experiment also led to the interesting observation that the nature of the global energy landscape depends on the base.
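The standard annealing iterations described above can be sketched as follows; energy_change, labels_of and assign are assumed callbacks into the graph labelling, the special multi-node iterations are omitted, and the initial temperature is a placeholder.

```python
# Sketch of the standard annealing iterations (Gibbs sampler with geometric cooling).
import math
import random

def anneal(nodes, labels_of, energy_change, assign, t0=10.0, cooling=0.98,
           n_iterations=400, rng=random.Random(0)):
    temperature = t0
    for _ in range(n_iterations):
        for node in rng.sample(nodes, len(nodes)):      # random node order
            # candidate labels: sulci whose field of view covers the node, plus "unknown"
            candidates = labels_of(node)
            # Gibbs transition probabilities exp(-dU/T), clamped to avoid overflow
            weights = [math.exp(min(50.0, -energy_change(node, label) / temperature))
                       for label in candidates]
            total = sum(weights)
            chosen = rng.choices(candidates, weights=[w / total for w in weights])[0]
            assign(node, chosen)
        temperature *= cooling                          # geometric schedule
```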

Indeed, the difference between the two minimizations is larger for the learning base than for the generalization base. This effect could be related to expert over-learning, which creates deeper local minima for the learning base. This could predict an easier minimization in generalization situations, which could allow us to use faster implementations.

Fig. 12. Final energy yielded by simulated annealing relative to the energy of the manual labelling. From left to right: 16 brains of the learning base, 5 brains of the test base, 5 brains of the generalization base. For each brain, the square/circle corresponds to an annealing including only standard iterations while the cross/star corresponds to the complete scheme.

6.2. Recognition rate

A global measure is proposed to assess the correct recognition rate. This measure corresponds to the proportion of cortical folds correctly identified according to the manual labelling. The contribution of each node to this global measure is weighted by its size (the number of voxels of the underlying skeleton; Mangin et al., 1995a). The mean recognition rate on each of the three bases is given in Fig. 13. In order to check the reproducibility of the recognition process, the minimization has been repeated ten times with different initializations for one brain of each base (Fig. 14(left)). This experiment has shown that the recognition rate is related to the depth of the local minimum obtained by the optimization process. This result is confirmed by Fig. 14(right), which shows the recognition rates for the 52 minimizations of the experiment described in Fig. 12. This result tends to prove that the global energy corresponding to our recognition system is anatomically meaningful, whatever the minimization difficulties. Therefore, the recognition rate could be easily improved if the best of several minimizations was kept as the final result.

Fig. 13. Node number, recognition rate, energy of the manual labelling (U_base), and energy of the automatic labelling (U_annealing) for each base.

Fig. 14. Left: recognition rate relative to final energy for ten different minimizations applied on one brain of each base. Right: recognition rate relative to final energy for the 52 minimizations of Fig. 12. Squares/circles denote standard annealing, while crosses/stars denote complete annealing.

The recognition rate obtained for the generalization base is 76%, which is very encouraging considering the variability of the folding patterns. As matters stand relative to our understanding of this variability, it should be noted that numerous errors of the system correspond to ambiguous configurations. In fact, after a careful inspection of the results, the neuroanatomist of our team often admits to a preference for the automatic labelling. Moreover, the automatic system often corrects flagrant errors due to the cumbersome nature of the manual labelling. Such disagreements between manual and automatic labelling explain the surprising observation that, whatever the underlying base, the final energy yielded by the minimization is lower than the energy related to the manual labelling. The base influence on the results calls for an enlargement of the learning base and of the test base, which was foreseeable and should improve the results.
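The size-weighted recognition rate described above reduces to a few lines:

```python
# Sketch of the size-weighted recognition rate: the proportion of cortical folds
# whose automatic label matches the manual one, each fold weighted by its size.
def recognition_rate(folds):
    """folds: iterable of (size, manual_label, automatic_label) triples."""
    total = sum(size for size, _, _ in folds)
    correct = sum(size for size, manual, auto in folds if manual == auto)
    return correct / total if total else 0.0

# toy example
print(recognition_rate([(120, "central", "central"), (40, "precentral", "central")]))  # 0.75
```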

We also plan to develop a system using several experts for each anatomical entity in order to get a better management of the coding of the structural variability (Rivière et al., 1998). This work will include automatic adaptation of the topology of the neural networks to each expert.

The pattern recognition system described in this paper includes many ad hoc solutions that are sometimes difficult to justify. The design of a computational system actually dealing with the problem of sulcus recognition, however, necessarily leads to such choices. Providing a discussion for each problematic point would be too cumbersome to be interesting. A few of them, however, have to be addressed.

6.3. The oversampling requirement

We have mentioned during the description of the preprocessing stage that a requirement to get a good behaviour of our method was an oversampling of the anatomical structures to be identified. While this oversampling is usually reached at the level of standard sulci, we are not satisfied yet with the sulcus split into sulcal roots. Therefore, a new segmentation related to the mean curvature of the cortical surface has been recently proposed in order to use the detection of the sulcal wall deformations induced by buried gyri (Cachia et al., 2001). Moreover, a study of the brain growth process from antenatal to adult age has been triggered in order to improve the current sulcal root point of view. Finally, we plan to add into our random graph model a new kind of anatomical entity corresponding to the merge of two smaller entities. This would allow us to consistently tackle the recognition of the sulcal roots although some of the buried gyri are not always detected.

6.4. The recognition rate

The choice of a global measure to assess the recognition rate gives a very crude idea of the results. This measure, however, is sufficient to study the behaviour of the framework relative to the size of the databases. The cumbersome sulcus by sulcus analysis underlying this global measure may be found in (Rivière, 2000). In our opinion, however, the small size of the learning base should lead to analyze these results with great caution. Another weakness of our recognition rate is the fact that the same sulcus segmentation is used both for manual and automatic labelling. This is clearly a bias in favour of our method. Therefore, in the future, more careful studies will have to be performed using several segmentations for each brain, using for instance several MR scans. Considering the cumbersome manual identifications, however, we have decided to postpone that kind of validation studies until the discovery of a reliable detector of buried gyri.

6.5. The probability map

While our framework has been intentionally developed with weak localization constraints, accurate probability maps of the localization of the main structures in a standard space may be used. In our opinion, however, such constraints could lead to a much less versatile system unable to react correctly to outlier brains. In fact, large scale experiments will have to be performed in order to find the good balance between localization and structural constraints.

7. Conclusion

A number of approaches relying on the deformable atlas paradigm consider that anatomical a priori knowledge can be completely embedded in iconic templates. While this point of view is very powerful for anatomical structures presenting low inter-individual variability, it seems insufficiently versatile to deal with the human cortical anatomy. This observation has led several teams to investigate approaches relying on higher levels of representation. All these approaches rely on a preprocessing stage which extracts sulcal related features describing the cortical topography. These features can be sulcal points (Chui et al., 1999), sulcal lines inferred from skeletons (Royackkers et al., 1999; Caunce and Taylor, 1999), topologically simple surfaces (Mangin et al., 1995), 2D parametric models of the sulcal median axis (Le Goualher et al., 1997; Vaillant and Davatzikos, 1997; Zeng et al., 1999), crest lines (Declerck et al., 1995; Manceaux-Demiau et al., 1997) or cortex depth maxima (Lohmann and von Cramon, 1998; Rettmann et al., 1999). In our opinion, this direction of research can lead further than the usual deformable template approach. In fact, these two types of work should be merged in the near future. It has to be understood, however, that some of the challenging issues about cortical anatomy mentioned in the introduction require new neuroscience results to be obtained. As such, image analysis teams addressing this kind of research must be responsible for providing neuroscientists with new tools in order to speed up anatomical and brain mapping research. Our system is used today to question the current understanding of the variability and to help the emergence of better anatomical models. Various direct applications have been developed in the fields of epilepsy surgery planning and brain mapping.

References

Bajcsy, R., Broit, C., 1982. Matching of deformed images. In: Proceedings of the Sixth International Conference on Pattern Recognition, October.

Cachia, A., Mangin, J.-F., Rivière, D., Boddaert, N., Andrade, A., Kherif, F., Sonigo, P., Papadopoulos-Orfanos, D., Zilbovicius, M., Poline, J.-B., Bloch, I., Brunelle, F., Régis, J., 2001. A mean curvature based primal sketch to study the cortical folding process from antenatal to adult brain. In: Proceedings of MICCAI'01, Utrecht, LNCS. Springer, Berlin, in press.

Cachier, P., Mangin, J.-F., Pennec, X., Rivière, D., Papadopoulos-Orfanos, D., Régis, J., Ayache, N., 2001. Multipatient registration of brain MRI using intensity and geometric features. In: Proceedings of MICCAI'01, Utrecht, LNCS. Springer, Berlin, in press.

Caunce, A., Taylor, C.J., 1999. Using local geometry to build 3D sulcal models. In: Proceedings of IPMI'99, LNCS. Springer, Berlin.

Chui, H., Rambo, J., Duncan, J., Schultz, R., Rangarajan, A., 1999. Registration of cortical anatomical structures via robust 3D point matching. In: Proceedings of IPMI'99, LNCS. Springer, Berlin.

Collins, D.L., Le Goualher, G., Evans, A.C., 1998. Non-linear cerebral registration with sulcal constraints. In: Proceedings of MICCAI'98, LNCS 1496.

Declerck, J., Subsol, G., Thirion, J.-P., Ayache, N., 1995. Automatic retrieval of anatomical structures in 3D medical images. In: Proceedings of CVRMed, LNCS 905.

Friston, K.J., Ashburner, J., Frith, C.D., Poline, J.B., Heather, J.D., Frackowiak, R.S.J., 1995. Spatial registration and normalization of images. Hum. Brain Mapping 2.

Geman, S., Geman, D., 1984. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Mach. Intell. 6 (6).

Hellier, P., Barillot, C. Cooperation between local and global approaches to register brain images. In: Proceedings of IPMI'01, University of California, Davis, in press.

Le Goualher, G., Barillot, C., Bizais, Y., 1997. Modeling cortical sulci using active ribbons. Int. J. Pattern Recognit. Artif. Intell. 11 (8).

Le Goualher, G., Collins, D.L., Barillot, C., Evans, A.C., 1998. Automatic identification of cortical sulci using a 3D probabilistic atlas. In: Proceedings of MICCAI'98, MIT, LNCS. Springer, Berlin.

Le Goualher, G., Procyk, E., Collins, D.L., Venugopal, R., Barillot, C., Evans, A.C., 1999. Automated extraction and variability analysis of sulcal neuroanatomy. IEEE Trans. Med. Imaging 18 (3).

Likar, B., Viergever, M., Pernus, F., 2000. Retrospective correction of MR intensity inhomogeneity by information minimization. In: Proceedings of MICCAI 2000, LNCS. Springer, Berlin.

Lohmann, G., von Cramon, D.Y., 1998. Automatic detection and labelling of the human brain cortical folds in MR data sets. In: Proceedings of ECCV.

Lohmann, G., von Cramon, D.Y., 2000. Automatic labelling of the human cortical surface using sulcal basins. Medical Image Analysis 4 (3).

Malandain, G., Bertrand, G., Ayache, N., 1993. Topological segmentation of discrete surfaces. Int. J. Comput. Vis. 10 (2).

Manceaux-Demiau, A., Mangin, J.-F., Régis, J., Pizzato, O., Frouin, V., 1997. Differential features of cortical folds. In: Proceedings of CVRMed/MRCAS, Grenoble, LNCS. Springer, Berlin.

Mangin, J.-F., 2000. Entropy minimization for automatic correction of intensity nonuniformity. In: Proceedings of MMBIA, South Carolina.

Mangin, J.-F., Frouin, V., Bloch, I., Régis, J., Lopez-Krahe, J., 1995a. From 3D MR images to structural representations of the cortex topography using topology preserving deformations. J. Math. Imaging Vis. 5 (4).

Mangin, J.-F., Régis, J., Bloch, I., Frouin, V., Samson, Y., Lopez-Krahe, J., 1995b. A Markovian random field based random graph modelling the human cortical topography. In: Proceedings of CVRMed, Nice, LNCS. Springer, Berlin.

Mangin, J.-F., Régis, J., Frouin, V., 1996. Shape bottlenecks and conservative flow systems. In: Proceedings of MMBIA, San Francisco.

Mangin, J.-F., Coulon, O., Frouin, V., 1998. Robust brain segmentation using histogram scale-space analysis and mathematical morphology. In: Proceedings of MICCAI'98, MIT, LNCS. Springer, Berlin.

Ono, M., Kubik, S., Abernethey, C.D., 1990. Atlas of the Cerebral Sulci. Thieme, New York.

Orr, G., Müller, K.-R., 1998. Neural Networks: Tricks of the Trade. LNCS. Springer, Berlin.

Poupon, C., Mangin, J.-F., Clark, C.A., Frouin, V., Régis, J., Le Bihan, D., Bloch, I., 2001. Towards inference of human brain connectivity from MR diffusion tensor data. Medical Image Analysis 5.

Régis, J., Mangin, J.-F., Frouin, V., Sastre, F., Peragut, J.C., Samson, Y., 1995. Generic model for the localization of the cerebral cortex and preoperative multimodal integration in epilepsy surgery. Stereotact. Funct. Neurosurg. 65.

Rettmann, M.E., Xu, C., Pham, D.L., Prince, J.L., 1999. Automated segmentation of sulcal regions. In: Proceedings of MICCAI'99, Cambridge, UK, LNCS. Springer, Berlin.

Rivière, D., 2000. Automatic learning of the variability of the patterns of the human cortical folding. PhD thesis (in French), Evry University.

Rivière, D., Mangin, J.-F., Martinez, J.-M., Chavand, F., Frouin, V., 1998. Neural network based learning of local compatibilities for segment grouping. In: Proceedings of SSPR'98, LNCS. Springer, Berlin.

Royackkers, N., Desvignes, M., Fawal, H., Revenu, M., 1999. Detection and statistical analysis of human cortical sulci. NeuroImage 10, 625-641.

Rumelhart, D.E., Hinton, G.E., Williams, R.J., 1986. Learning Internal Representations by Error Backpropagation. MIT Press, Cambridge, MA.

Swendsen, R.H., Wang, J.S., 1987. Nonuniversal critical dynamics in Monte Carlo simulations. Phys. Rev. Lett. 58.

Talairach, J., Tournoux, P., 1988. Co-planar Stereotaxic Atlas of the Human Brain. Thieme, New York.

Thompson, P., Toga, A.W., 1996. Detection, visualization and animation of abnormal anatomic structure with a deformable probabilistic brain atlas based on random vector field transformation. Medical Image Analysis 1 (4).

Thompson, P.M., Woods, R.P., Mega, M.S., Toga, A.W., 2000. Mathematical/computational challenges in creating deformable and probabilistic atlases of the human brain. Hum. Brain Mapping 9.

Tupin, F., Maitre, H., Mangin, J.-F., Nicolas, J.-M., Pechersky, E., 1998. Linear feature detection on SAR images: application to the road network. IEEE Trans. Geosci. Remote Sens. 36 (2).

Vaillant, M., Davatzikos, C., 1997. Finding parametric representations of the cortical sulci using an active contour model. Medical Image Analysis 1 (4).

Vailland, M., Davatzikos, C., 1999. Hierarchical matching of cortical features for deformable brain image registration. In: Proceedings of IPMI'99, LNCS. Springer, Berlin.

Watson, J.D.G., Myers, R., Frackowiak, R.S.J. et al., 1993. Area V5 of the human cortex: evidence from a combined study using positron emission tomography and magnetic resonance imaging. Cerebral Cortex 3.

Welker, W. Why does the cerebral cortex fissure and fold? Cerebral Cortex 8B.

Wong, A.K.C., You, M.L., 1985. Entropy and distance of random graphs with application to structural pattern recognition. IEEE Trans. Pattern Anal. Mach. Intell. 7.

Zeng, X., Staib, L.H., Schultz, R.T., Tagare, H., Win, L., Duncan, J.S., 1999. A new approach to 3D sulcal ribbon finding from MR images. In: Proceedings of MICCAI'99, Cambridge, UK, LNCS. Springer, Berlin.
