High Performance Computing (HPC) in Medical Image Analysis (MIA) at the Surgical Planning Laboratory (SPL)


Ron Kikinis, M.D., Simon Warfield, Ph.D., Carl-Fredrik Westin, Ph.D.
Surgical Planning Laboratory, Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, Boston, MA, USA

Abstract

This paper outlines some of the usage and applications for HPC in the medical image analysis field. As opposed to traditional HPC work that focuses on developing new optimization strategies and improving the implementation of existing environments, the work reported here focuses strongly on the utilization of HPC technology (both commercial and public domain) in an application-driven clinical and research environment. Work performed at the Surgical Planning Laboratory (SPL) of Brigham and Women's Hospital and Harvard Medical School in Boston is used as an example of this type of activity.

1 Introduction

1.1 General

Postprocessing of digital diagnostic imaging data allows the extraction of quantitative measures and the generation of complex visualizations. This can be used for monitoring of disease progression, diagnosis, preoperative planning, and intraoperative guidance and monitoring. Postprocessing adds value to medical images. However, successful postprocessing requires complex and optimized processing systems. Development of such systems is challenging and requires multiyear interdisciplinary collaborations for a successful outcome. In order to achieve working and robust solutions, both the medical problems and the data acquisitions have to be carefully selected and optimized. Research in Medical Image Analysis (MIA) is currently pursued by a small international community, which holds an annual meeting, the MICCAI Conference. Some of the results of this research have already found their way into commercial products. Several additional concepts are currently being developed into products.
Many new applications are being researched right now and hold promise for a continuing stream of successful clinical applications.

1.2 High Performance Computing (HPC) in MIA

MIA is not a traditional field of application for HPC. The cultures of HPC and MIA are very different. In HPC, the application people typically access the supercomputer centers and must have their own funding for the work on the HPC machines. Some centers donate the CPU time to the application scientists, but the effort to develop the HPC application is typically funded from the application side. In contrast to this, MIA typically happens in a medical environment, because access to physicians and patients is the most limiting step in this type of research. This requires computer scientists to spend multiple years in cross-disciplinary training and in building collaborations with medical personnel. In this environment, unscheduled access to resources is of critical importance. Because the field is relatively young, a lot of the initial work did not require very sophisticated and computationally expensive algorithms. The maturation of the field is now changing this. Furthermore, the amount of data produced by the different scanners was relatively moderate until recently. Several developments are now moving the field of MIA towards the use of HPC techniques. Specifically, this is because of the increase in computational requirements, in data volume, and in the intensity of electronic utilization of the data. Table 1 provides an overview of trends in these three areas. During the development phase of an algorithm, the only aspect of relevance is the computational demand of a given algorithm. Since it takes several years to develop working analysis systems in this field, it becomes necessary to make assumptions about the hardware that will be at the right price-performance point at the time when the software system is ready for widespread clinical use.
This is one of the motivations for using HPC hardware and software for research in this field.

Table 1: Evolution of demands on the computational resources (past → present → future).

Volume of diagnostic imaging data: MB per patient scan → MB in routine, 1-4 GB in research mode → gigabytes per scan in routine
Computational demands by algorithms: supervised statistical classification → self-adaptive iterative statistical classification → self-adaptive classification modulated by anatomical and pathological knowledge
Digital accessibility: proof-of-concept demonstration projects, not economic → economic in certain niche applications (ORs, remote reading) → standard everywhere

2 Evolution of algorithms

In most cases algorithm evolution in medical image analysis begins with a concrete imaging problem. A computer scientist will assemble a library of test cases and will begin development of a processing pipeline or network that will solve the problem. The initial testing is typically done on small subsets of the data so that testing cycles are relatively short. The code generated is typically "messy" and not optimized. Once the basic algorithm concept is developed, it is necessary to further validate and test the robustness of the algorithm by applying it to a larger number of cases. This typically requires adding I/O libraries and improving the speed by adding algorithmic accelerations through approximations and speed-ups. Finally, the inner loops get optimized and threaded. Where necessary, cluster-oriented capabilities are added. When the algorithm is proven in such a fashion, the usage shifts to routine use. At this point, it becomes necessary to run batches of jobs with the data and algorithm. In addition to the basic mechanism discussed above, there is another evolution: the scientists developing new algorithms become used to having higher performance available. This results in algorithms that have more robustness and self-adaptive behavior at the expense of higher computational requirements.
In some cases, the progression will continue to implementation on dedicated hardware and finally revert back to generic desktop machines at significantly higher performance levels (due to the time that has passed between the initial development and the widespread deployment). Some processing pipelines are not suited for hardware acceleration, so not every algorithm goes through all of these steps, but many do.

The following text outlines some of the usage and applications for HPC in the medical image analysis field. As opposed to traditional HPC work that focuses on developing new optimization strategies and improving the implementation of existing environments, the work reported here focuses strongly on the utilization of HPC technology (both commercial and public domain) in an application-driven clinical and research environment. Work performed at the Surgical Planning Laboratory (SPL) of Brigham and Women's Hospital and Harvard Medical School in Boston is used as an example of this type of activity. The text is organized as follows: the Material and Methods section summarizes the environment in which the reported work was done; the Results section uses a number of concrete examples to demonstrate the principles outlined in the Material and Methods section; and finally, in the Discussion, the results and the concepts introduced are put into a wider framework.

Table 2: Migration of algorithms from research to clinical routine.

Stage | Hardware | Code | Data | User
Concept development | Workstation | Simple code | Small subsets (2D or subvolumes) | Computer scientist
Small batch testing | Workstation/Server | Optimization/Threading | Full data, small series | Computer scientist / Application scientist
Clinical studies | Server/Cluster | MPI, LSF | Larger series / clinical research / routine | Application scientist / Technicians
First products | Dedicated hardware | Specific API | Clinical routine | Technicians
"Mass-market" products | Workstation | Highly optimized code | Routine use | Technicians

3 Material and Methods

3.1 The environment at the SPL

Because of the necessity of unscheduled access to large resources, which can occur in some situations, the traditional mode of operation of HPC groups doesn't work for MIA applications. The extreme example is surgical planning, where surgical procedures sometimes occur on very short notice without the possibility of scheduling CPU time on a compute server. In such a situation it has to be possible to interrupt other, properly scheduled, activities. This is in stark contrast to the way most computer centers work. This is one of the justifications for building up an HPC environment inside a hospital. Such an environment then allows the majority of the development cycle (see Table 2) to take place in that research lab. The personnel of such a facility have to be experienced in medical image analysis and in the different HPC techniques. Typically, this requires academic computer scientists with multi-year exposure to this type of research.

Workstations

The main part of the computer resources in the SPL is based on workstations from Sun Microsystems. Currently, over 65 workstations and compute servers are available in the lab; among them are 8 SPARC-10, 12 SPARC-20, 5 UltraSPARC-1, 1 UltraSPARC-2 and 3 UltraSPARC-30 machines. The majority of our 450 GB of hard disk storage is connected to a file server, a 4-CPU SPARCcenter 1000 with 256 MB of RAM. In addition, the SPL has a dedicated web server and firewall system.

General Purpose Network

The SPL has its own dedicated network that is separated from the department network and is designed to accommodate the high bandwidth requirements of the image processing activities. The three sites of the SPL are connected by multi-mode fiber which was installed by the Brigham and Women's Hospital.
Each site has a Xyplex Enterprise Hub with a 655 Mbps backplane and seven slots that can hold either conventional Ethernet modules, switched Ethernet modules with 8 ports, or ATM modules with two-port daughter cards for either fiber-based or UTP 5 based 155 Mbps transfer rates. The hubs can provide managed, switched, bridging and routing services to the ports. The hubs, as well as consulting and engineering services for installation and integration into the existing networking environment, were donated by Whitaker-Xyplex. Using this dedicated hardware, the three sites are connected by OC-3 ATM fiber optic backbones running at 155 Mbps. Each of the three sites has switched Ethernet and conventional Ethernet available for high throughput performance. The general purpose network is currently being updated to an architecture based on a Gigabit Ethernet backbone and switched 100BASE-T Fast Ethernet.

3.2 HPC infrastructure

Legacy hardware

In the SPL, a massively parallel CM-200 (Thinking Machines Corporation) with 16k processors has been in use for interactive segmentation and fast volume rendering of image volumes. A Model 3 Power Visualization System (PVS) from IBM with 32 i860 processors and 512 MB of RAM has been used for some of the segmentation work. The programs used on both systems were developed in the SPL and allowed us to gain some experience with parallelization and HPC issues in our domain. However, in both cases, the code had to be optimized to the hardware for the necessary performance and is therefore not portable. This experience formed the basis for the current work.

Current HPC hardware

In September of 1996 the SPL acquired two Sun Microsystems Ultra HPC 5000s. Each of these machines is currently configured with UltraSPARC CPUs and two gigabytes of shared memory, and is equipped with a graphics accelerator and approximately 150 gigabytes of local hard disks in disk arrays connected via fiber channel.
At the end of 1997, the lab acquired an additional Sun HPC 6000 compute server equipped with UltraSPARC-II CPUs, five gigabytes of RAM, and a 120 gigabyte local hard disk connected via fiber channel. For the purpose of cluster computing, the three machines are interconnected via a hybrid network. The main network is Fast Ethernet. The two HPC 5000s are also connected via a private SCI network. The HPC 6000 will in the near future be connected to the 5000s via 1 Gbps Ethernet. Cluster traffic will be consolidated onto a pure SCI network, using a high performance four-port SCI switch. Most of the results reported here were obtained with the network described above. We are now in the process of establishing a higher performance dedicated networking infrastructure for the cluster computing environment. This network will be based on an SCI switch which will connect all three Sun HPC servers in a low latency network with gigabit performance.

Figure 1: Segmentation paradigm

3.3 Software / Developers' environment

We are primarily using the Sun Workshop environment and GNU compilers and editors for most of the initial code development. For distributed memory applications running on our cluster of SMPs, we use MPI. The development environment consists of the Sun Parallel Development Environment (PDE). We use Sun's RTE (Run Time Environment) to execute MPI jobs. This software is part of Sun's HPC 2.0 package.

3.4 The future of MIA HPC in the context of service offices for PACS

In the medical field, there is a trend toward storage and viewing of digital imaging data (such as CT and MRI) on workstations. Workstation display capability and price-performance has reached a point where this begins to make economic sense even in clinical environments, and the importance of picture archiving and communication systems (PACS) is growing. Postprocessing will become increasingly easy as the data is available digitally and the required computational performance is available. However, many of the analysis systems are still too complex for use by casual operators. It is therefore likely that postprocessing will be outsourced to large processing centers, with data transmission over the net. Such centers will operate in a concept comparable to clinical laboratories, where blood samples are centrally processed in large facilities. For one of our applications, MS, we are in the process of establishing such a center today.

4 Results

4.1 Overview

The core of algorithmic activity for medical image processing is centered around the issues of segmentation and registration. We approach the segmentation problem as a control theory design problem. We seek to understand images with signal processing techniques that enhance important features of the image, and have designed a feedback control system to generate the desired segmentation (Figure 1).
Results of the segmentation get aligned to other data acquisitions and to the actual patient during procedures [Jolesz, 1997]. The components of this segmentation approach are image acquisition, adaptive filtering, statistical classification and explicit anatomical modeling. Finally, the results of the segmentation are visualized using different rendering methods. The majority of the processing modules in our approach are computationally demanding and require some form of parallelization for optimal usability. Table 3 below lists the different modules and gives an overview of the parallelization strategies used.

4.2 Feature Enhancement

In order to reduce the noise level and to emphasize image structures of interest, the image data is filtered prior to segmentation. We have clinical applications involving segmentation of MR images which routinely use anisotropic diffusion for enhancing the gray-level image

Table 3: Overview of algorithms, their parallelization, and applications.

Feature Enhancement
Idea: Enhance selected image characteristics.
Method: Spatial and frequency domain filtering: convolutions.
Parallelization: SMP and MPI style for Fourier transforms [Frigo, 1997] and convolutions.
Application: Noise reduction [Gerig, 1992], removal of partial volume artefacts [Westin, 1997].

Classification (k-nn, Parzen window)
Idea: Classify an unknown voxel based on prototypes.
Method: Nonparametric supervised statistical classification [Duda, 1973], [Cover, 1967], [Cover, 1968], [Clarke, 1993], [Warfield, 1996], [Friedman, 1975].
Parallelization: Each voxel treated separately [Friedman, 1975]. SMP for core, MPI.
Application: Classification in different areas of the body [Kikinis, 1992], [Huppi, 1998], [Warfield, 1995], [Warfield, 1996].

Classification (EM)
Idea: Increase robustness of the statistical approach through adaptive behavior.
Method: Iterates between statistical classification and intensity prediction/correction [Wells, 1996].
Parallelization: Classification step as in k-nn; intensity correction [Wells, 1986]: convolutions. SMP.
Application: Classification primarily of brain MRI [Morocz, 1995], [Kikinis, 1997], [Iosifescu, 1997].

Linear Registration (intra-subject)
Idea: Use inherent contrast similarity to align images.
Method: Requires entropy and joint entropy computation [Wells, 1996a].
Parallelization: Joint histogram computation, parallelized by computing the histogram of data chunks; joint entropy from the histogram. SMP, MPI.
Application: Registration of slices for multichannel analysis [Huppi, 1998], [Nakajima, 1997].

Linear Registration (inter-subject)
Idea: Measure mismatch of alignment of two subjects by counting the number of voxel labels that don't match.
Method: Multiresolution alignment using XOR function [Warfield, 1998].
Parallelization: Data is first resampled, then tissue labels are counted to calculate the registration. MPI, SMP.
Application: Initial alignment for template driven segmentation [Warfield, 1996].

Nonlinear Registration
Idea: Use a rubber-sheet transform to align two data sets from different subjects.
Method: Multiresolution approach with fast local similarity measurement and a simplified regularization model.
Parallelization: Low pass filtering, upsampling, downsampling, arithmetic operations, solving systems of equations. SMP.
Application: Template driven segmentation [Warfield, 1996].

Visualization (surface model generation)
Idea: Generate highly optimized triangle surface models.
Method: Pipeline of marching cubes [Lorensen, 1987], triangle reduction [Schroeder, 1992], and triangle smoothing [Taubin, 1995].
Parallelization: Distributed computation of triangle models for each structure of a data set (up to 300). LSF.
Application: Visualization for surgical applications and for presentation purposes [Ozlen, 1998], [Chabrerie, 1998], [Chabrerie, 1998a], [Kikinis, 1996].

Visualization (volume rendering)
Idea: Direct visualization of volume data without prior processing.
Method: Shear warp algorithm [Ylä-Jääski, 1997], [Lacroute, 1994], [Saiviroonporn, 1998].
Parallelization: Render subvolumes separately. SMP, MPI.
Application: Visualize data before segmentation, interactive editing.

structure prior to segmentation [Gerig, 1992]. By
smoothing along structures and not across them, the noise level can be reduced without severely blurring the image. For this purpose we use a parallel implementation of the anisotropic diffusion algorithm running on our CM-200. Recently, a multi-threaded adaptive filtering scheme was implemented in C which takes advantage of the parallelism available on our Sun SMP machines. This algorithm is based on steerable filters which conform to the local structure adaptively [Granlund, 1995]. One of the applications that uses this filtering technology is segmentation of bone from Computed Tomography (CT) images; see section 4.9 below.

Convolution involves multiplication and summation of filter kernel coefficients with signal voxels over the local area of filter support. Since the result in each voxel can be calculated independently, these calculations can be done in parallel, and thus the speedup for convolution is linear with the number of CPUs. It should be noted, however, that for large filter kernels (e.g. 9x9x9 voxels) it is in general more efficient to calculate the result of a convolution using the Discrete Fourier Transform (DFT), unless the calculations are performed on a massively parallel machine. When performing filtering using the Fourier transform we take advantage of a software package developed at LCS at MIT, "FFTW" [Frigo, 1997]. FFTW is a C subroutine library for performing the DFT in one or more dimensions. We run FFTW under Solaris using POSIX threads. Performing a Fourier transform using SMP on a typical sized CT data set (512x512x100 voxels) on our 20-CPU Sun HPC server takes about 1 minute. An MPI version of the FFTW routines is available, which makes it possible to perform the FFT calculation on distributed memory machines in addition to shared-memory architectures.

4.3 Classification

k-nn Classification

Classification is a technique for the segmentation of medical images.
The k-nearest Neighbor (k-nn) classification rule is a technique for nonparametric supervised pattern classification. An excellent description of k-nn classification and its properties is provided in [Duda, 1973]. Each voxel is labeled with a tissue class selected from a set of possible tissue classes. The set of possible tissue classes is described, in k-nn classification, by selecting a set of typical voxels (prototypes) for each tissue type. Voxels of an unknown class are then classified by comparing the voxel intensity characteristics with those of the prototypes, and selecting the class that occurs most frequently amongst the k nearest prototypes. The classification of each voxel is independent of neighboring voxels. As such, the most straightforward parallelization strategy is to apply the k-nn classification rule to several voxels at the same time, up to the number of CPUs available for computation. Speedup is linear with the number of CPUs. Our SMP implementation uses a POSIX threads based `work pile' to distribute the classification of chunks of the voxel data to each of the CPUs. We have applied this technique to the segmentation of MR scans of patients with brain tumor, MR scans of baby brains, MR scans of the knee and MR scans of patients with multiple sclerosis.

EM Segmentation

EM segmentation is a method that iterates between conventional tissue classification and the estimation of intensity inhomogeneity to correct for imaging artifacts. We model intra- and inter-scan MRI intensity inhomogeneities with a spatially-varying factor called the gain field that multiplies the intensity data. The application of a logarithmic transformation to the intensities allows the artifact to be modeled as an additive bias field [Wells, 1996]. If the gain field is known, then it is relatively easy to estimate the tissue class by applying a conventional intensity-based segmenter to the corrected data.
Similarly, if the tissue classes are known, then it is straightforward to estimate the gain field by comparing predicted and observed intensities. It may be problematic, however, to determine either the gain or the tissue type without knowledge of the other. We have shown that it is possible to estimate both using an iterative algorithm (which typically converges in five to ten iterations). The EM algorithm consists of a conventional classification step, an intensity prediction step, and an intensity correction step. Classification is parallelized by classifying different voxels simultaneously, as above. The same is done with the intensity prediction step. Intensity correction primarily involves low pass filtering. This is implemented with a parallel unity gain filtering step [Wells, 1986] that costs only two multiplies per voxel per axis, independent of filter length. We have applied this technique to the segmentation of MR scans of patients with schizophrenia [Iosifescu, 1997], multiple sclerosis [Kikinis, 1997], and normal volunteers [Morocz, 1997].

4.4 Linear Registration

Linear registration algorithms are typically used for the purpose of aligning several data sets of the same subject that contain complementary information (e.g. a CT and an MRI scan); see Figure 2. Another application is the initial alignment, as a preliminary step before non-linear

registration, of a canonical data set and the data from a specific subject (see Figure 3). The different algorithms that have been published in the literature [Warfield, 1998], [West, 1997] typically trade off speed (e.g. through feature extraction or subsampling) against robustness and capture range (e.g. simulated annealing). We have developed two different forms of linear registration, one suited to inter-patient registration and one suited to intra-patient registration.

4.5 Intra-patient Registration

The algorithm described here works with the concept of subsampling of the gray scale data for speed-up. Entropy calculations are performed in a histogram feature space. The algorithm is relatively fast and doesn't require any preprocessing of the data. However, it requires a start pose that is relatively close to the final result. In practical terms, the operator will pick three paired landmarks and the algorithm will then calculate an alignment to subvoxel accuracy. Alignment is assessed by using inherent contrast similarity to directly measure image alignment. The algorithm requires entropy and joint entropy computation. Mutual information is defined in terms of entropy [Wells, 1996]. The first term is the entropy in the reference volume. The second term is the entropy of the part of the test volume into which the reference volume projects. It encourages transformations that project the reference volume into complex parts of the test volume. The third term, the (negative) joint entropy of the reference and test volume, contributes when they are functionally related. We use a histogram-based density estimate for the joint entropy estimation. The joint histogram computation is parallelized by dividing the data into chunks, computing the histogram of each chunk, and then adding the histograms together. The joint entropy can then be calculated by a loop over the histogram.
Registration of acquisitions with different contrasts into multichannel data sets allows better segmentation and visualization. Examples include image analysis in neonates (T2/PD - SPGR) and surgical planning (MRA, SPECT, fMRI, MRI, CT; see Table 3 for references); see Figure 2.

4.6 Inter-patient Registration

The basic assumption of the MI algorithm described in the previous section is that we have two data sets containing the same structure. In a situation where we are trying to align two data sets of different subjects, this assumption does not hold. The metrics used conventionally for assessing the quality of alignment of two data sets, such as error minimization in a sparse subsampling of a data set, do not work satisfactorily. Dense feature comparison turns out to be more robust than sparse feature comparison in such situations. Parallelization is used to speed up dense feature comparisons, making the application of this technique practical in a clinical context; see Figure 3. The idea is to generate segmentations of the patient scans to be aligned, measure mismatch of alignment by counting the number of voxels that don't match, and find the transform that minimizes the mismatch.

Figure 2: Merging of two data acquisitions from the same subject. Left: Brain surface, viewed from the back, as extracted from a T1 weighted MR scan. Right: enlarged detail from the center of the image. The vessels, which are represented in a dark color, have been merged using the MI algorithm. They fit very well into the existing grooves in the brain surface.

Figure 4: Atlas to patient initial alignment. Even after successful execution of the registration algorithm there is a remnant of misalignment, which is due to differences in shape.

Figure 3 shows a flowchart describing the registration process. Each scan to be registered is classified and a multiresolution pyramid of the classified scan is constructed. An initial alignment is selected as either the identity transform or a transform identified with the process described below. For each level of the pyramid, the optimum alignment is determined by minimizing the mismatch of corresponding tissue labels. Each evaluation of this mismatch is computed in parallel on a cluster of SMPs. The evaluation of a particular transform involves the comparison of aligned data in a two-step process. First, the moving data set is resampled into the frame of the stationary data set. Second, the label values are compared voxelwise. Each of these steps can be parallelized by carrying out the operations simultaneously on subsets of voxels in the frame of the stationary data set. This algorithm was initially developed for inter-patient registration, such as the initial alignment for template driven segmentation (TDS). TDS is used in many applications, such as the quantitative analysis of MS, brain development, schizophrenia, and rheumatoid arthritis. More recently, we have begun to utilize the algorithm for intra-patient alignment when a large capture range is needed.

Figure 3: Flowchart describing the registration process. The imaging data is converted to a multiresolution pyramid of tissue labels, and at each level of the pyramid a registration transform is estimated. The computation of the mismatch between tissue labels is implemented on a cluster of SMPs.

Figure 5: Example of the use of nonlinear registration in surgical procedures. The left image shows a large tumor adjacent to the brain area that controls the motor functions of the body (the so-called motor cortex). The right image shows the corticospinal tract extracted from a digital brain atlas [Kikinis, 1996] and warped into the expected position. This can be used by a surgeon to assess the location of critical areas during the planning of a surgical procedure.

4.7 Non-linear registration

Local shape differences between data sets can be identified by finding a 3D deformation field that alters the coordinate system of one data set to maximize the similarity of local intensities with the other. Elastic matching aims to match a template, describing the anatomy expected to be present, to a particular patient scan so that the information associated with the template can be projected directly onto the patient scan on a voxel-to-voxel basis. The template can be an atlas of normal anatomy (deterministic or probabilistic), or it can be a scan from a different modality, or a scan from the same modality. The template can contain information typically found in anatomical textbooks but, unlike normal textbooks, can be linked to any form of relevant digital information. For elastic matching, we are using an approach that is similar in concept to the work reported by Bajcsy and Kovacic [Bajcsy and Kovacic, 1989] and by Collins and Evans [Collins and Evans, 1992]. However, our implementation uses several different algorithmic improvements to speed up the processing, including a multiresolution approach with fast local similarity measurement and a simplified regularization model for the elastic membrane [Dengler and Schmidt, 1988]. Our matcher, which is implemented in C, is based on essentially the same algorithm as that implemented by Dengler in APL, with a few improvements and modifications.
Our implementation uses algorithms parallelized for SMP, such as low pass filtering, upsampling and downsampling, arithmetic operations, and solving systems of equations. Nonlinear registration is primarily used for incremental alignment in TDS, following the linear alignment step. It is an integral part of the TDS processing network. In addition, non-linear registration has a role in intra-patient registration where the patient's anatomy has moved (change of position); see Figure 5.

4.8 Visualization

Surface model generation

Visualizing the surface of structures by simulating light reflection requires the generation of models by segmentation. The data is segmented into binary label maps, and a surface model generation pipeline is applied, consisting of the marching cubes algorithm for triangle model generation, followed by triangle decimation and triangle smoothing to reduce the triangle count. The algorithm is parallelized by distributed computation of the triangle models for each structure of a data set. Efficient triangle model generation has been used for the visual verification of segmentation procedures and for visualization for surgical planning and navigation [Nakajima, 1997], [Nakajima, 1997a], [Kikinis, 1996].

Volume rendering

Visualization of structures without the need for the extensive preprocessing required by the surface model approach can be done using volume rendering. This is of benefit if the structures to be visualized are constantly changing. Ray casting and shear warp algorithms are

10 Figure 6: Flowchart showing a network of processing modules. among the most popular approaches for volume rendering. Among others, we have used a shear-warp algorithm, implemented on a CM 200 [Saiviroonporn, 1998]. The algorithm is parallelized by applying the light transmission model simultaneously to different sections of the data associated with different screen pixels. Visualization of data before segmentation, visualize the magnitude of vector fields, interactive editing of volume data. 4.9 Clinical Applications of HPC in MIA In the majority of clinical cases, the algorithms that have been discussed are not deployed in isolation, but rather as an iterative network as displayed Figure Segmentation of scans of patients with multiple sclerosis A specific example for the application of the technology discussed above, is the quantitative analysis of MRI in patients with multiple sclerosis (MS). MS is a disease of the white matter of the brain and spine, which affects over a 300,000 patients in the US alone (ca. 1 per 1000). The patients suffer from this disease for decades. Typically, the patient will have periods of relatively little change which are followed by periods were they perceive more symptoms. While we don't have a good to treat the cause of the illness, there are potent medications available to treat individual breakouts of the disease. Unfortunately, these treatments have all severe side effects and can not be applied on a permanent basis. The physicians are therefore faced with the sometimes difficult decision, as to when to apply those treatments. MRI offers a direct visualization of the lesions caused by MS in the white matter of the brain. Quantitative measures based on analysis of MRI's of MS lesions are therefore an objective measure for the state of the disease. Such quantitative measures can be used to assess the progression or regression of the MS lesions under treatment. 
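As an illustration of the kind of quantitative measure such a system produces, the following sketch computes total lesion volume from a segmented label map and the relative change between two time points. The label value and voxel spacing are hypothetical, chosen for illustration only.

```python
import numpy as np

def lesion_volume_ml(label_map, lesion_label, voxel_size_mm):
    """Volume of all voxels carrying the lesion label, in milliliters."""
    voxel_mm3 = float(np.prod(voxel_size_mm))
    n_voxels = int(np.count_nonzero(label_map == lesion_label))
    return n_voxels * voxel_mm3 / 1000.0  # 1 mL = 1000 mm^3

def volume_change_percent(volume_before, volume_after):
    """Relative change between two time points, to track progression or regression."""
    return 100.0 * (volume_after - volume_before) / volume_before
```

Tracking such a scalar over serial scans of the same patient is what turns the segmentation output into an objective measure of disease state.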
While radiologists have no trouble recognizing lesions, they use anatomical knowledge to identify the white matter and then look for changes in signal intensity within it. The problem faced by image processing algorithms is that gray matter of the brain and white matter lesions have overlapping signal intensity properties. To take the same approach with image processing methods, we need to generate a mask of the white matter. We have developed an algorithm that achieves this by mapping a digital atlas from a normal subject into the patient's data set [Warfield, 1996]. Figure 6 provides an overview of the network of processing modules used to obtain the automated segmentation displayed in Figure 7; for a detailed description see [Warfield, 1996]. First, the data is filtered with a feature-enhancing filter for noise reduction (see above, [Gerig, 1992]). Then, an initial classification is performed, based on signal intensity properties, using the EM algorithm (see above). A generic digital brain atlas [Kikinis, 1996] is warped into the patient data set using our non-linear warping algorithm (see above). A fast region growing algorithm then uses different criteria to reliably identify the neocortical gray matter and the deep gray matter. Criteria for the neocortex are the probability of location of neocortical gray matter from the warped atlas, the signal intensity properties from the classification step, and the fact that the neocortical gray matter has the topology of a crumpled sheet. This allows generation of a mask of the white matter of the brain, within which white matter lesions can then be sought. The final result of this processing system is a quantitative measure of disease progression derived from imaging data. To date, we have applied this system to over 1500 MRI scans.

Figure 7: Single slice out of a brain-covering MR acquisition in a patient with MS. The left image is proton density weighted, the center image is T2 weighted. The right image shows the results of the segmentation system: the skin and skull have been removed, the white matter of the brain is bright yellow, the gray matter is dark gray, the lesions are colorized in a reddish hue, and the cerebrospinal fluid is represented in two different shades of blue. This result was obtained automatically.

Bone segmentation from CT

Surgery of the musculoskeletal system is the fourth largest surgical procedure category. Computer aided image guidance for planning prior to such surgical procedures and for intraoperative navigation during intervention is of increasing importance. To successfully leverage the higher quality and quantity of imaging in minimally invasive scenarios, image information must be provided to the surgeon in a non-overwhelming manner and without increasing the demand on the surgeon's navigation skills [Jolesz, 1992]. A prerequisite for full-fledged image guidance is the availability of accurate and robust methods for the segmentation of bone. The current state of the art for the identification of bone in clinical practice is thresholding, a method which is simple and fast. Unfortunately, thresholding also produces many artifacts. This problem is particularly severe for thin bones, such as in the sinus area of the skull.
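The initial intensity classification step of the MS segmentation network described above can be sketched with a plain one-dimensional Gaussian-mixture EM fit. This is an illustrative stand-in for the actual multichannel EM segmenter, using hypothetical intensity values.

```python
import numpy as np

def em_gaussian_mixture(intensities, n_classes, n_iter=50):
    """Fit a 1D Gaussian mixture to voxel intensities with EM; returns
    per-class (weight, mean, std) and the posterior class probabilities."""
    x = np.asarray(intensities, dtype=float).ravel()
    # Initialize means on intensity quantiles, equal weights, broad stds.
    means = np.quantile(x, (np.arange(n_classes) + 0.5) / n_classes)
    stds = np.full(n_classes, x.std() / n_classes + 1e-6)
    weights = np.full(n_classes, 1.0 / n_classes)
    for _ in range(n_iter):
        # E step: posterior probability of each class for each voxel.
        resp = weights * np.exp(-0.5 * ((x[:, None] - means) / stds) ** 2) / stds
        resp /= resp.sum(axis=1, keepdims=True)
        # M step: re-estimate parameters from the posteriors.
        nk = resp.sum(axis=0)
        weights = nk / x.size
        means = (resp * x[:, None]).sum(axis=0) / nk
        stds = np.sqrt((resp * (x[:, None] - means) ** 2).sum(axis=0) / nk) + 1e-6
    return weights, means, stds, resp
```

In the actual pipeline this classification is only the starting point: atlas priors and topological criteria resolve the intensity overlap between gray matter and lesions that a purely intensity-driven classifier cannot.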
Another area where current techniques often fail is the automatic, reliable and robust identification of individual bones, which requires precise separation of the joint spaces. 3D renderings of bone based on thresholding are currently available on most state-of-the-art CT consoles. As mentioned, thresholding alone leads to suboptimal and unsatisfying results in the vast majority of cases. It is seldom possible to automatically separate the different bones adjacent to a given joint (e.g. femur and pelvis) or different fracture fragments. Thus, it is not possible to display the different anatomical or pathoanatomical components of a joint or fracture separately. A 3D visualization of a fractured wrist, for example, is useless unless each bone and each fragment can be displayed and evaluated separately. Similar problems exist in areas with very thin bone, such as the paranasal sinuses and around the orbits. However, the signal intensities are reversed in the two example scenarios (joint spaces are dark and thin bone is bright in CT data). Accordingly, 3D reconstruction for craniofacial surgery will also benefit from improved segmentation results.

Here we use local 3D structure for segmentation [Westin, 1997]. A tensor descriptor is estimated for the neighborhood of each voxel in the data set [Knutsson, 1989]. The tensors are created from a combination of the outputs of a set of 3D quadrature filters. The shape of a tensor describes the local structure of the neighborhood in terms of how much it resembles a plane, a line, or a sphere. Traditional methods are based purely on gray-level value discrimination and have difficulties in recovering thin bone structures due to so-called partial voluming, a problem present in all such sampled data. The segmentation is instead based on the degree to which a given 3D image neighborhood is planar. Sampling theory shows that partial voluming artifacts can be overcome by resampling the image using a new signal basis, one which more closely resembles the signal. A tensor description formed by combining the outputs of a set of orientation-selective quadrature filters provides just such a signal decomposition. In 3D, we can interpret three simple neighborhoods from the symmetric tensor: a planar, a linear and an isotropic case. This analysis, when used to perform adaptive thresholding of CT data, has been very effective in recovering thin bone structure which would otherwise be lost, see Figure 8.

Figure 8: Result of segmentation of CT. Top left shows a cut through the skull indicating the location of the slice of interest. Top right shows the gray-level image. Lower left shows the segmentation result from simple thresholding. Lower right shows the result from adaptive thresholding using local shape information. Note that many of the thin bone structures in the sinus areas which disappear with thresholding can be recovered [Westin, 1997].

4.10 Speedups achieved

The speedups obtained depend on many different issues: whether an algorithm runs on a single machine (SMP) or on a cluster, whether the data fits into memory, and how communication-intensive the algorithm is. We have found that the combination of threading and MPI is currently not well supported by commercial environments (although this will change soon). Nevertheless, we are able to achieve close to linear speedups in most cases. For some communication-intensive applications (e.g. the interpatient linear registration algorithm [Warfield, 1998]), we will need to wait for an improved network setup before we can achieve optimal utilization of our cluster.
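The tensor shape analysis used in the bone segmentation section can be sketched as follows. This is an illustrative approximation: it builds a gradient structure tensor from a cubic neighborhood rather than the quadrature-filter tensor of [Knutsson, 1989], and derives plane-, line- and sphere-likeness measures from the sorted eigenvalues, following the three-case interpretation described in the text.

```python
import numpy as np

def shape_measures(tensor):
    """Plane/line/sphere measures of a symmetric 3x3 tensor from its
    eigenvalues lam1 >= lam2 >= lam3: how planar, linear, or isotropic
    the underlying neighborhood is."""
    lam = np.sort(np.linalg.eigvalsh(tensor))[::-1]
    l1 = lam[0] + 1e-12  # guard against division by zero in flat regions
    c_plane = (lam[0] - lam[1]) / l1
    c_line = (lam[1] - lam[2]) / l1
    c_sphere = lam[2] / l1
    return c_plane, c_line, c_sphere

def gradient_structure_tensor(vol, center, radius=2):
    """Average outer product of intensity gradients in a cubic neighborhood,
    used here as a simple stand-in for the quadrature filter tensor."""
    z, y, x = center
    sub = vol[z - radius - 1: z + radius + 2,
              y - radius - 1: y + radius + 2,
              x - radius - 1: x + radius + 2].astype(float)
    gz, gy, gx = np.gradient(sub)
    g = np.stack([gz, gy, gx], axis=-1)[1:-1, 1:-1, 1:-1].reshape(-1, 3)
    return g.T @ g / g.shape[0]
```

A high plane measure at a voxel indicates a thin sheet-like structure, so the bone threshold can be lowered adaptively there to recover thin bone that a single global threshold would discard.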

5 Discussion

As demonstrated by the examples taken from work at the SPL, HPC has many uses in MIA. These uses go far beyond the simple speed-up of algorithms that could otherwise be explored on workstations. We have discussed different parallelization approaches, such as threading, cluster computing using MPI, and more specialized implementations on SIMD and NUMA architectures. The individual algorithms can be combined into processing networks which can be run automatically or in a supervised fashion. We have made extensive use of HPC concepts and technology for many years, beginning with work on SIMD machines and progressing over time to portable code that incorporates both threading and clustering concepts. In our experience, HPC is an enabling technology that has allowed us to ask questions that would not have been possible otherwise. Beyond its use as a tool for research, HPC will become an important way to make the results of image processing techniques available to a larger percentage of the medical community. The development of concepts and algorithms in medical image analysis and their reduction to practice requires multiple years of work. Conversion of research results into commercial applications requires several years in addition, due to safety issues and regulatory requirements specific to the medical field; for commercial application it is typically necessary to rewrite software completely for compliance with FDA requirements. It takes 5-10 years until the performance of a given HPC machine becomes available in a desktop computer. Doing the initial work in an HPC environment therefore allows researchers to explore the potential of tomorrow's desktop hardware with today's tools. In this context, HPC has the potential to be an enabling technology for the development of crucial software tools for the analysis of imaging data.
6 Conclusion

As we have discussed and demonstrated with examples, there are several ways in which HPC can be used in medical image analysis. There is the global trend towards computationally more demanding algorithms, more data, and more people interested in this type of work (see Table 1), and there is the evolution of individual algorithms from conception to routine clinical use (see Table 2). Both of these trends open opportunities for the application of HPC. We strongly believe that in the near future the use of HPC techniques will increase significantly in the field of medical image analysis. Availability and accessibility of HPC infrastructure and applications will be of critical importance in this development.

7 Acknowledgements

We would like to thank Marianna Jakab for editorial help. Charles Guttmann and Marianna Jakab provided the MS images. S. Wells and D. Gering provided the linear registration examples. M. Kaus provided the elastic registration examples. R. K. was supported in part by the following grants: NIH: RO1 CA, PO1 CA A1, PO1 AG; NSF: BES; Darpa: F. S. W. was funded in part by The National Multiple Sclerosis Society and C-F W. by the Wenner-Gren Foundation.

8 References

Bajcsy, R., and S. Kovacic. 1989. Multiresolution elastic matching. Computer Vision, Graphics, and Image Processing 46.

Chabrerie, A., F. Ozlen, S. Nakajima, M. E. Leventon, H. Atsumi, E. Grimson, E. Keeve, S. Helmers, J. Riviello Jr, G. Holmes, F. Duffy, F. Jolesz, R. Kikinis, P. McL. Black. 1998. Three-Dimensional Reconstruction and Surgical Navigation in Pediatric Epilepsy Surgery. In press in Pediatric Neurosurgery.

Chabrerie, A., F. Ozlen, S. Nakajima, M. E. Leventon, H. Atsumi, E. Grimson, F. Jolesz, R. Kikinis, P. McL. Black. 1998a. Three-dimensional Reconstruction for Low-grade Glioma Surgery. Neurosurg. Focus 4(4).

Clarke, L. P., R. P. Velthuizen, S. Phuphanich, J. D. Schellenberg, J. A. Arrington, and M. Silbiger. MRI: Stability of Three Supervised Segmentation Techniques. Magnetic Resonance Imaging, Vol. 11.

Collins, D., T. Peters, and others. 1992. Model Based Segmentation of Individual Brain Structures from MRI Data. In SPIE: Visualization in Biomedical Computing.

Cover, T. M. and P. E. Hart. 1967. Nearest Neighbor Pattern Classification. IEEE Transactions on Information Theory, Vol. IT-13, No. 1.

Cover, T. M. 1968. Estimation by the Nearest Neighbor Rule. IEEE Transactions on Information Theory, Vol. IT-14, No. 1.

Dengler, Joachim, and Markus Schmidt. 1988. The Dynamic Pyramid - A Model for Motion Analysis with Controlled Continuity. International Journal of Pattern Recognition and Artificial Intelligence 2(2).

Duda, R. O. and P. E. Hart. 1973. Pattern Classification and Scene Analysis. John Wiley & Sons, Inc.

Friedman, J. H., F. Baskett and L. J. Shustek. 1975. An Algorithm for Finding Nearest Neighbors. IEEE Transactions on Computers, Vol. C-24, No. 10.

Frigo, Matteo and Steven G. Johnson. 1998. The Fastest Fourier Transform in the West. MIT-LCS-TR-728. To appear in the Proceedings of the 1998 International Conference on Acoustics, Speech, and Signal Processing (ICASSP '98), Seattle, May 1998.

Gerig, G., O. Kuebler, R. Kikinis, and F. A. Jolesz. 1992. Nonlinear Anisotropic Filtering of MRI Data. IEEE Trans. Med. Imaging 11(2).

Granlund, G. H. and H. Knutsson. 1995. Signal Processing for Computer Vision. Kluwer Academic Publishers.

Huppi, P. S., S. Warfield, R. Kikinis, P. Barnes, G. P. Zientara, F. A. Jolesz, M. K. Tsuji, and J. J. Volpe. 3D Visualization and Quantitation of the Developing Human Brain In Vivo. Annals of Neurology, to appear.

Iosifescu, D., Martha E. Shenton, Simon K. Warfield, Ron Kikinis, Joachim Dengler, Ferenc A. Jolesz, and Robert W. McCarley. An Automated Measurement of Subcortical Brain MR Structures in Schizophrenia. Neuroimage, Vol. 6.

Jolesz, F. A. and F. Shtern. 1992. The operating room of the future. Report of the National Cancer Institute Workshop "Imaging-Guided Stereotactic Tumor Diagnosis and Treatment". Investigative Radiology, Vol. 27(4).

Jolesz, F. A. 1997. Image-guided Procedures and the Operating Room of the Future. Radiology 204.

Kikinis, R., M. Shenton, F. Jolesz, G. Gerig, J. Martin, M. Anderson, D. Metcalf, C. Guttmann, R. W. McCarley, W. Lorensen, and H. Cline. 1992. Quantitative Analysis of Brain and Cerebrospinal Fluid Spaces with MR Imaging. JMRI 2.

Kikinis, R., P. L. Gleason, T. M. Moriarty, M. R. Moore, E. Alexander, P. E. Stieg, M. Matsumae, W. E. Lorensen, H. E. Cline, P. M. Black, and F. A. Jolesz. 1996. Computer assisted Interactive Three-dimensional Planning for Neurosurgical Procedures. Neurosurgery 38(4).

Kikinis, R., M. E. Shenton, D. V. Iosifescu, R. W. McCarley, P. Saiviroonporn, H. H. Hokama, A. Robatino, D. Metcalf, C. G. Wible, C. M. Portas, R. Donnino, F. A. Jolesz. 1996. A Digital Brain Atlas for Surgical Planning, Model Driven Segmentation and Teaching. IEEE Transactions on Visualization and Computer Graphics, Vol. 2, No. 3.

Kikinis, R., C. R. G. Guttmann, D. Metcalf, W. M. Wells, G. J. Ettinger, H. L. Weiner, and F. A. Jolesz. Quantitative follow-up of patients with multiple sclerosis using MRI. Part I: Technical aspects. Radiology.

Knutsson, H. 1989. Representing local structure using tensors. In The 6th Scandinavian Conference on Image Analysis, Oulu, Finland.

Lacroute, P., and M. Levoy. 1994. Fast Volume Rendering Using a Shear-Warp Factorization of the Viewing Transformation. Paper read at Annual Conference Series, ACM SIGGRAPH, Orlando, Florida.

Lorensen, W. E., and H. E. Cline. 1987. Marching Cubes: A High Resolution 3D Surface Construction Algorithm. Computer Graphics 21(3).

Morocz, I. A., H. Gudbjartsson, T. Kapur, G. P. Zientara, S. Smith, S. Muza, T. Lyons, and F. A. Jolesz. Quantification of diffuse brain edema in acute mountain sickness using 3D MRI. Paper read at Society of Magnetic Resonance, Nice, France.

Nakajima, Shin, Hideki Atsumi, Abhir H. Bhalerao, Ferenc A. Jolesz, Ron Kikinis, Toshiki Yoshimine, Thomas M. Moriarty, and Philip E. Stieg. 1997. Computer-assisted Surgical Planning for Cerebrovascular Neurosurgery. Neurosurgery 41(2).

Nakajima, S., H. Atsumi, R. Kikinis, T. M. Moriarty, D. C. Metcalf, F. A. Jolesz, P. McL. Black. 1997a. Use of Cortical Surface Registration for Image-Guided Neurosurgery. Neurosurgery, Vol. 40, No. 6.

Ozlen, Fatma, Shin Nakajima, Alexandra Chabrerie, Michael E. Leventon, Eric Grimson, Ron Kikinis, Ferenc Jolesz, Peter McL. Black. The excision of cortical dysplasia in the language area with a surgical navigator: a case report. Accepted for publication in Epilepsia.

Saiviroonporn, Pairash, Andre Robatino, Janos Zahajszky, Ron Kikinis, Ferenc A. Jolesz. 1998. Real Time Interactive 3D-Segmentation. Acad Radiol, Vol. 5.

Schroeder, W., J. Zarge, and W. Lorensen. 1992. Decimation of Triangle Meshes. Computer Graphics 26(2).

Taubin, G. 1995. A Signal Processing Approach to Fair Surface Design. Paper read at Computer Graphics.

Warfield, S., J. Dengler, J. Zaers, C. R. G. Guttmann, W. M. Wells, G. J. Ettinger, J. Hiller, and R. Kikinis. 1995. Automatic identification of grey matter structures from MRI to improve the segmentation of white matter lesions. Paper read at Proc. MRCAS '95, Baltimore, MD.

Warfield, Simon, Joachim Dengler, Joachim Zaers, Charles R. G. Guttmann, William M. Wells III, Gil J. Ettinger, John Hiller, and Ron Kikinis. 1996. Automatic Identification of Grey Matter Structures from MRI to Improve the Segmentation of White Matter Lesions. Journal of Image Guided Surgery 1(6).

Warfield, S. 1996. Fast k-NN Classification for Multichannel Image Data. Pattern Recognition Letters, Vol. 17, No. 7.

Warfield, S., Ferenc Jolesz and Ron Kikinis. 1998. A High Performance Computing Approach to the Registration of Medical Imaging Data. Parallel Computing, to appear.

Wells, W. M. 1986. Efficient synthesis of Gaussian filters by cascaded uniform filters. IEEE Trans. Pattern Anal. Mach. Intell. 8.

Wells, W. M., W. E. L. Grimson, R. Kikinis, and F. A. Jolesz. 1996. Adaptive Segmentation of MRI Data. IEEE Transactions on Medical Imaging 15(4).

Wells, W. M., P. Viola, H. Atsumi, S. Nakajima, R. Kikinis. 1996a. Multi-Modal Volume Registration by Maximization of Mutual Information. Medical Image Analysis, Vol. 1, No. 1.

West, J., J. M. Fitzpatrick, M. Y. Wang, B. M. Dawant, C. R. Maurer, R. M. Kessler, R. J. Maciunas, C. Barillot, D. Lemoine, A. Collignon, F. Maes, P. Suetens, D. Vandermeulen, P. A. van den Elsen, S. Napel, T. S. Sumanaweera, B. Harkness, P. F. Hemler, D. L. Hill, D. J. Hawkes, C. Studholme, J. B. Maintz, M. A. Viergever, G. Malandain and R. P. Woods. 1997. Comparison and evaluation of retrospective intermodality brain image registration techniques. J. Comput. Assist. Tomogr., Vol. 21, No. 4.

Westin, C.-F., A. Bhalerao, H. Knutsson and R. Kikinis. 1997. Using Local 3D Structure for Segmentation of Bone from Computer Tomography Images. IEEE Conference on Computer Vision and Pattern Recognition (CVPR'97), San Juan, Puerto Rico.

Ylä-Jääski, J., F. Klein, O. Kübler. 1991. Fast Direct Display of Volume Data for Medical Diagnosis. CVGIP: Graphical Models and Image Processing, Vol. 53, No. 1, pp. 7-18.


More information

A Flexible Cluster Infrastructure for Systems Research and Software Development

A Flexible Cluster Infrastructure for Systems Research and Software Development Award Number: CNS-551555 Title: CRI: Acquisition of an InfiniBand Cluster with SMP Nodes Institution: Florida State University PIs: Xin Yuan, Robert van Engelen, Kartik Gopalan A Flexible Cluster Infrastructure

More information

Building an Inexpensive Parallel Computer

Building an Inexpensive Parallel Computer Res. Lett. Inf. Math. Sci., (2000) 1, 113-118 Available online at http://www.massey.ac.nz/~wwiims/rlims/ Building an Inexpensive Parallel Computer Lutz Grosz and Andre Barczak I.I.M.S., Massey University

More information

Interactive Level-Set Deformation On the GPU

Interactive Level-Set Deformation On the GPU Interactive Level-Set Deformation On the GPU Institute for Data Analysis and Visualization University of California, Davis Problem Statement Goal Interactive system for deformable surface manipulation

More information

Parallel Analysis and Visualization on Cray Compute Node Linux

Parallel Analysis and Visualization on Cray Compute Node Linux Parallel Analysis and Visualization on Cray Compute Node Linux David Pugmire, Oak Ridge National Laboratory and Hank Childs, Lawrence Livermore National Laboratory and Sean Ahern, Oak Ridge National Laboratory

More information

How To Filter Spam Image From A Picture By Color Or Color

How To Filter Spam Image From A Picture By Color Or Color Image Content-Based Email Spam Image Filtering Jianyi Wang and Kazuki Katagishi Abstract With the population of Internet around the world, email has become one of the main methods of communication among

More information

Windows Server 2008 R2 Hyper-V Live Migration

Windows Server 2008 R2 Hyper-V Live Migration Windows Server 2008 R2 Hyper-V Live Migration White Paper Published: August 09 This is a preliminary document and may be changed substantially prior to final commercial release of the software described

More information

evm Virtualization Platform for Windows

evm Virtualization Platform for Windows B A C K G R O U N D E R evm Virtualization Platform for Windows Host your Embedded OS and Windows on a Single Hardware Platform using Intel Virtualization Technology April, 2008 TenAsys Corporation 1400

More information

Interactive Level-Set Segmentation on the GPU

Interactive Level-Set Segmentation on the GPU Interactive Level-Set Segmentation on the GPU Problem Statement Goal Interactive system for deformable surface manipulation Level-sets Challenges Deformation is slow Deformation is hard to control Solution

More information

PARALLEL & CLUSTER COMPUTING CS 6260 PROFESSOR: ELISE DE DONCKER BY: LINA HUSSEIN

PARALLEL & CLUSTER COMPUTING CS 6260 PROFESSOR: ELISE DE DONCKER BY: LINA HUSSEIN 1 PARALLEL & CLUSTER COMPUTING CS 6260 PROFESSOR: ELISE DE DONCKER BY: LINA HUSSEIN Introduction What is cluster computing? Classification of Cluster Computing Technologies: Beowulf cluster Construction

More information

High Performance Computing in CST STUDIO SUITE

High Performance Computing in CST STUDIO SUITE High Performance Computing in CST STUDIO SUITE Felix Wolfheimer GPU Computing Performance Speedup 18 16 14 12 10 8 6 4 2 0 Promo offer for EUC participants: 25% discount for K40 cards Speedup of Solver

More information

Computer Organization & Architecture Lecture #19

Computer Organization & Architecture Lecture #19 Computer Organization & Architecture Lecture #19 Input/Output The computer system s I/O architecture is its interface to the outside world. This architecture is designed to provide a systematic means of

More information

Knowledge Discovery from patents using KMX Text Analytics

Knowledge Discovery from patents using KMX Text Analytics Knowledge Discovery from patents using KMX Text Analytics Dr. Anton Heijs anton.heijs@treparel.com Treparel Abstract In this white paper we discuss how the KMX technology of Treparel can help searchers

More information

CS231M Project Report - Automated Real-Time Face Tracking and Blending

CS231M Project Report - Automated Real-Time Face Tracking and Blending CS231M Project Report - Automated Real-Time Face Tracking and Blending Steven Lee, slee2010@stanford.edu June 6, 2015 1 Introduction Summary statement: The goal of this project is to create an Android

More information

Windows Server 2008 R2 Hyper-V Live Migration

Windows Server 2008 R2 Hyper-V Live Migration Windows Server 2008 R2 Hyper-V Live Migration Table of Contents Overview of Windows Server 2008 R2 Hyper-V Features... 3 Dynamic VM storage... 3 Enhanced Processor Support... 3 Enhanced Networking Support...

More information

MI Software. Innovation with Integrity. High Performance Image Analysis and Publication Tools. Preclinical Imaging

MI Software. Innovation with Integrity. High Performance Image Analysis and Publication Tools. Preclinical Imaging MI Software High Performance Image Analysis and Publication Tools Innovation with Integrity Preclinical Imaging Molecular Imaging Software Molecular Imaging (MI) Software provides high performance image

More information

GEDAE TM - A Graphical Programming and Autocode Generation Tool for Signal Processor Applications

GEDAE TM - A Graphical Programming and Autocode Generation Tool for Signal Processor Applications GEDAE TM - A Graphical Programming and Autocode Generation Tool for Signal Processor Applications Harris Z. Zebrowitz Lockheed Martin Advanced Technology Laboratories 1 Federal Street Camden, NJ 08102

More information

Scalable Developments for Big Data Analytics in Remote Sensing

Scalable Developments for Big Data Analytics in Remote Sensing Scalable Developments for Big Data Analytics in Remote Sensing Federated Systems and Data Division Research Group High Productivity Data Processing Dr.-Ing. Morris Riedel et al. Research Group Leader,

More information

DARPA, NSF-NGS/ITR,ACR,CPA,

DARPA, NSF-NGS/ITR,ACR,CPA, Spiral Automating Library Development Markus Püschel and the Spiral team (only part shown) With: Srinivas Chellappa Frédéric de Mesmay Franz Franchetti Daniel McFarlin Yevgen Voronenko Electrical and Computer

More information

An Interactive Visualization Tool for Nipype Medical Image Computing Pipelines

An Interactive Visualization Tool for Nipype Medical Image Computing Pipelines An Interactive Visualization Tool for Nipype Medical Image Computing Pipelines Ramesh Sridharan, Adrian V. Dalca, and Polina Golland Computer Science and Artificial Intelligence Lab, MIT Abstract. We present

More information

203.4770: Introduction to Machine Learning Dr. Rita Osadchy

203.4770: Introduction to Machine Learning Dr. Rita Osadchy 203.4770: Introduction to Machine Learning Dr. Rita Osadchy 1 Outline 1. About the Course 2. What is Machine Learning? 3. Types of problems and Situations 4. ML Example 2 About the course Course Homepage:

More information

A Tool for creating online Anatomical Atlases

A Tool for creating online Anatomical Atlases A Tool for creating online Anatomical Atlases Summer Scholarship Report Faculty of Science University of Auckland Summer 2003/2004 Daniel Rolf Wichmann UPI: dwic008 UID: 9790045 Supervisor: Burkhard Wuensche

More information

Face Model Fitting on Low Resolution Images

Face Model Fitting on Low Resolution Images Face Model Fitting on Low Resolution Images Xiaoming Liu Peter H. Tu Frederick W. Wheeler Visualization and Computer Vision Lab General Electric Global Research Center Niskayuna, NY, 1239, USA {liux,tu,wheeler}@research.ge.com

More information

Overlapping Data Transfer With Application Execution on Clusters

Overlapping Data Transfer With Application Execution on Clusters Overlapping Data Transfer With Application Execution on Clusters Karen L. Reid and Michael Stumm reid@cs.toronto.edu stumm@eecg.toronto.edu Department of Computer Science Department of Electrical and Computer

More information

Visualisation in the Google Cloud

Visualisation in the Google Cloud Visualisation in the Google Cloud by Kieran Barker, 1 School of Computing, Faculty of Engineering ABSTRACT Providing software as a service is an emerging trend in the computing world. This paper explores

More information

Scalability and Classifications

Scalability and Classifications Scalability and Classifications 1 Types of Parallel Computers MIMD and SIMD classifications shared and distributed memory multicomputers distributed shared memory computers 2 Network Topologies static

More information

Dynamic Load Balancing of Virtual Machines using QEMU-KVM

Dynamic Load Balancing of Virtual Machines using QEMU-KVM Dynamic Load Balancing of Virtual Machines using QEMU-KVM Akshay Chandak Krishnakant Jaju Technology, College of Engineering, Pune. Maharashtra, India. Akshay Kanfade Pushkar Lohiya Technology, College

More information

Virtualized Security: The Next Generation of Consolidation

Virtualized Security: The Next Generation of Consolidation Virtualization. Consolidation. Simplification. Choice. WHITE PAPER Virtualized Security: The Next Generation of Consolidation Virtualized Security: The Next Generation of Consolidation As we approach the

More information

Whitepapers on Imaging Infrastructure for Research Paper 1. General Workflow Considerations

Whitepapers on Imaging Infrastructure for Research Paper 1. General Workflow Considerations Whitepapers on Imaging Infrastructure for Research Paper 1. General Workflow Considerations Bradley J Erickson, Tony Pan, Daniel J Marcus, CTSA Imaging Informatics Working Group Introduction The use of

More information

High Quality Image Magnification using Cross-Scale Self-Similarity

High Quality Image Magnification using Cross-Scale Self-Similarity High Quality Image Magnification using Cross-Scale Self-Similarity André Gooßen 1, Arne Ehlers 1, Thomas Pralow 2, Rolf-Rainer Grigat 1 1 Vision Systems, Hamburg University of Technology, D-21079 Hamburg

More information

Fibre Channel Overview of the Technology. Early History and Fibre Channel Standards Development

Fibre Channel Overview of the Technology. Early History and Fibre Channel Standards Development Fibre Channel Overview from the Internet Page 1 of 11 Fibre Channel Overview of the Technology Early History and Fibre Channel Standards Development Interoperability and Storage Storage Devices and Systems

More information

- An Essential Building Block for Stable and Reliable Compute Clusters

- An Essential Building Block for Stable and Reliable Compute Clusters Ferdinand Geier ParTec Cluster Competence Center GmbH, V. 1.4, March 2005 Cluster Middleware - An Essential Building Block for Stable and Reliable Compute Clusters Contents: Compute Clusters a Real Alternative

More information

Force/position control of a robotic system for transcranial magnetic stimulation

Force/position control of a robotic system for transcranial magnetic stimulation Force/position control of a robotic system for transcranial magnetic stimulation W.N. Wan Zakaria School of Mechanical and System Engineering Newcastle University Abstract To develop a force control scheme

More information

CS 3530 Operating Systems. L02 OS Intro Part 1 Dr. Ken Hoganson

CS 3530 Operating Systems. L02 OS Intro Part 1 Dr. Ken Hoganson CS 3530 Operating Systems L02 OS Intro Part 1 Dr. Ken Hoganson Chapter 1 Basic Concepts of Operating Systems Computer Systems A computer system consists of two basic types of components: Hardware components,

More information

Capacity Plan. Template. Version X.x October 11, 2012

Capacity Plan. Template. Version X.x October 11, 2012 Template Version X.x October 11, 2012 This is an integral part of infrastructure and deployment planning. It supports the goal of optimum provisioning of resources and services by aligning them to business

More information

Hierarchical Segmentation of Malignant Gliomas via Integrated Contextual Filter Response

Hierarchical Segmentation of Malignant Gliomas via Integrated Contextual Filter Response Hierarchical Segmentation of Malignant Gliomas via Integrated Contextual Filter Response Shishir Dube 1, Jason J Corso 2, Alan Yuille 3, Timothy F. Cloughesy 4, Suzie El-Saden 1, and Usha Sinha 1 1 Medical

More information

The Big Data methodology in computer vision systems

The Big Data methodology in computer vision systems The Big Data methodology in computer vision systems Popov S.B. Samara State Aerospace University, Image Processing Systems Institute, Russian Academy of Sciences Abstract. I consider the advantages of

More information

Cluster Computing at HRI

Cluster Computing at HRI Cluster Computing at HRI J.S.Bagla Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad 211019. E-mail: jasjeet@mri.ernet.in 1 Introduction and some local history High performance computing

More information

Interconnect Efficiency of Tyan PSC T-630 with Microsoft Compute Cluster Server 2003

Interconnect Efficiency of Tyan PSC T-630 with Microsoft Compute Cluster Server 2003 Interconnect Efficiency of Tyan PSC T-630 with Microsoft Compute Cluster Server 2003 Josef Pelikán Charles University in Prague, KSVI Department, Josef.Pelikan@mff.cuni.cz Abstract 1 Interconnect quality

More information

Lecture 2 Parallel Programming Platforms

Lecture 2 Parallel Programming Platforms Lecture 2 Parallel Programming Platforms Flynn s Taxonomy In 1966, Michael Flynn classified systems according to numbers of instruction streams and the number of data stream. Data stream Single Multiple

More information

Visualization and Feature Extraction, FLOW Spring School 2016 Prof. Dr. Tino Weinkauf. Flow Visualization. Image-Based Methods (integration-based)

Visualization and Feature Extraction, FLOW Spring School 2016 Prof. Dr. Tino Weinkauf. Flow Visualization. Image-Based Methods (integration-based) Visualization and Feature Extraction, FLOW Spring School 2016 Prof. Dr. Tino Weinkauf Flow Visualization Image-Based Methods (integration-based) Spot Noise (Jarke van Wijk, Siggraph 1991) Flow Visualization:

More information

Linköping University Electronic Press

Linköping University Electronic Press Linköping University Electronic Press Book Chapter Multi-modal Image Registration Using Polynomial Expansion and Mutual Information Daniel Forsberg, Gunnar Farnebäck, Hans Knutsson and Carl-Fredrik Westin

More information

Surgery Support System as a Surgeon s Advanced Hand and Eye

Surgery Support System as a Surgeon s Advanced Hand and Eye Surgery Support System as a Surgeon s Advanced Hand and Eye 8 Surgery Support System as a Surgeon s Advanced Hand and Eye Kazutoshi Kan Michio Oikawa Takashi Azuma Shio Miyamoto OVERVIEW: A surgery support

More information

David Rioja Redondo Telecommunication Engineer Englobe Technologies and Systems

David Rioja Redondo Telecommunication Engineer Englobe Technologies and Systems David Rioja Redondo Telecommunication Engineer Englobe Technologies and Systems About me David Rioja Redondo Telecommunication Engineer - Universidad de Alcalá >2 years building and managing clusters UPM

More information

Multisensor Data Fusion and Applications

Multisensor Data Fusion and Applications Multisensor Data Fusion and Applications Pramod K. Varshney Department of Electrical Engineering and Computer Science Syracuse University 121 Link Hall Syracuse, New York 13244 USA E-mail: varshney@syr.edu

More information

Open Access A Facial Expression Recognition Algorithm Based on Local Binary Pattern and Empirical Mode Decomposition

Open Access A Facial Expression Recognition Algorithm Based on Local Binary Pattern and Empirical Mode Decomposition Send Orders for Reprints to reprints@benthamscience.ae The Open Electrical & Electronic Engineering Journal, 2014, 8, 599-604 599 Open Access A Facial Expression Recognition Algorithm Based on Local Binary

More information

Performance Monitoring of Parallel Scientific Applications

Performance Monitoring of Parallel Scientific Applications Performance Monitoring of Parallel Scientific Applications Abstract. David Skinner National Energy Research Scientific Computing Center Lawrence Berkeley National Laboratory This paper introduces an infrastructure

More information

Identification algorithms for hybrid systems

Identification algorithms for hybrid systems Identification algorithms for hybrid systems Giancarlo Ferrari-Trecate Modeling paradigms Chemistry White box Thermodynamics System Mechanics... Drawbacks: Parameter values of components must be known

More information

CHAPTER 3: DIGITAL IMAGING IN DIAGNOSTIC RADIOLOGY. 3.1 Basic Concepts of Digital Imaging

CHAPTER 3: DIGITAL IMAGING IN DIAGNOSTIC RADIOLOGY. 3.1 Basic Concepts of Digital Imaging Physics of Medical X-Ray Imaging (1) Chapter 3 CHAPTER 3: DIGITAL IMAGING IN DIAGNOSTIC RADIOLOGY 3.1 Basic Concepts of Digital Imaging Unlike conventional radiography that generates images on film through

More information

A System for Capturing High Resolution Images

A System for Capturing High Resolution Images A System for Capturing High Resolution Images G.Voyatzis, G.Angelopoulos, A.Bors and I.Pitas Department of Informatics University of Thessaloniki BOX 451, 54006 Thessaloniki GREECE e-mail: pitas@zeus.csd.auth.gr

More information

Intelligent Tools For A Productive Radiologist Workflow: How Machine Learning Enriches Hanging Protocols

Intelligent Tools For A Productive Radiologist Workflow: How Machine Learning Enriches Hanging Protocols GE Healthcare Intelligent Tools For A Productive Radiologist Workflow: How Machine Learning Enriches Hanging Protocols Authors: Tianyi Wang Information Scientist Machine Learning Lab Software Science &

More information

An Experimental Study of the Performance of Histogram Equalization for Image Enhancement

An Experimental Study of the Performance of Histogram Equalization for Image Enhancement International Journal of Computer Sciences and Engineering Open Access Research Paper Volume-4, Special Issue-2, April 216 E-ISSN: 2347-2693 An Experimental Study of the Performance of Histogram Equalization

More information

Deploying F5 BIG-IP Virtual Editions in a Hyper-Converged Infrastructure

Deploying F5 BIG-IP Virtual Editions in a Hyper-Converged Infrastructure Deploying F5 BIG-IP Virtual Editions in a Hyper-Converged Infrastructure Justin Venezia Senior Solution Architect Paul Pindell Senior Solution Architect Contents The Challenge 3 What is a hyper-converged

More information

High Performance Computing. Course Notes 2007-2008. HPC Fundamentals

High Performance Computing. Course Notes 2007-2008. HPC Fundamentals High Performance Computing Course Notes 2007-2008 2008 HPC Fundamentals Introduction What is High Performance Computing (HPC)? Difficult to define - it s a moving target. Later 1980s, a supercomputer performs

More information

GE Healthcare. Centricity* PACS and PACS-IW with Universal Viewer. Universal Viewer. Where it all comes together.

GE Healthcare. Centricity* PACS and PACS-IW with Universal Viewer. Universal Viewer. Where it all comes together. GE Healthcare Centricity* PACS and PACS-IW with Universal Viewer Universal Viewer. Where it all comes together. Where it all comes together Centricity PACS and Centricity PACS-IW with Universal Viewer

More information

Parallels Virtuozzo Containers

Parallels Virtuozzo Containers Parallels Virtuozzo Containers White Paper Virtual Desktop Infrastructure www.parallels.com Version 1.0 Table of Contents Table of Contents... 2 Enterprise Desktop Computing Challenges... 3 What is Virtual

More information

100 Gigabit Ethernet is Here!

100 Gigabit Ethernet is Here! 100 Gigabit Ethernet is Here! Introduction Ethernet technology has come a long way since its humble beginning in 1973 at Xerox PARC. With each subsequent iteration, there has been a lag between time of

More information

A General Framework for Tracking Objects in a Multi-Camera Environment

A General Framework for Tracking Objects in a Multi-Camera Environment A General Framework for Tracking Objects in a Multi-Camera Environment Karlene Nguyen, Gavin Yeung, Soheil Ghiasi, Majid Sarrafzadeh {karlene, gavin, soheil, majid}@cs.ucla.edu Abstract We present a framework

More information

White Paper. Recording Server Virtualization

White Paper. Recording Server Virtualization White Paper Recording Server Virtualization Prepared by: Mike Sherwood, Senior Solutions Engineer Milestone Systems 23 March 2011 Table of Contents Introduction... 3 Target audience and white paper purpose...

More information

Impedance 50 (75 connectors via adapters)

Impedance 50 (75 connectors via adapters) VECTOR NETWORK ANALYZER PLANAR TR1300/1 DATA SHEET Frequency range: 300 khz to 1.3 GHz Measured parameters: S11, S21 Dynamic range of transmission measurement magnitude: 130 db Measurement time per point:

More information

A Chromium Based Viewer for CUMULVS

A Chromium Based Viewer for CUMULVS A Chromium Based Viewer for CUMULVS Submitted to PDPTA 06 Dan Bennett Corresponding Author Department of Mathematics and Computer Science Edinboro University of PA Edinboro, Pennsylvania 16444 Phone: (814)

More information

IBM Deep Computing Visualization Offering

IBM Deep Computing Visualization Offering P - 271 IBM Deep Computing Visualization Offering Parijat Sharma, Infrastructure Solution Architect, IBM India Pvt Ltd. email: parijatsharma@in.ibm.com Summary Deep Computing Visualization in Oil & Gas

More information