Visualization of Time-Varying MRI Data for MS Lesion Analysis

Melanie Tory, Torsten Möller, and M. Stella Atkins
Department of Computer Science, Simon Fraser University, Burnaby, BC, Canada

In: Visualization, Display, and Image-Guided Procedures, Seong Ki Mun, Editor, Proceedings of SPIE Vol. 4319 (2001).

ABSTRACT

Conventional methods to diagnose and follow treatment of Multiple Sclerosis require radiologists and technicians to compare current images with older images of a particular patient, on a slice-by-slice basis. Although there has been progress in creating 3D displays of medical images, little attempt has been made to design visual tools that emphasize change over time. We implemented several ideas that attempt to address this deficiency. In one approach, isosurfaces of segmented lesions at each time step were displayed either on the same image (each time step in a different color) or consecutively in an animation. In a second approach, voxel-wise differences between time steps were calculated and displayed statically using ray casting. Animation was used to show cumulative changes over time. Finally, in a method borrowed from computational fluid dynamics (CFD), glyphs (small arrow-like objects) were rendered with a surface model of the lesions to indicate changes at localized points.

Keywords: 4D visualization, multiple sclerosis, MRI, volume rendering, isosurface extraction, glyph

1. INTRODUCTION

Multiple sclerosis (MS) is a disease characterized by demyelination of neurons in the central nervous system. Scar tissue and demyelinated areas appear as bright lesions in spin-spin (T2) and proton density (PD) Magnetic Resonance Imaging (MRI) scans; thus, physicians use MRI images to diagnose MS and follow disease progression. One important diagnostic measure is lesion area (on a two-dimensional (2D) image) or lesion volume (in three dimensions (3D)). Current methods to determine these values involve time-consuming manual segmentation of MS lesions on each individual data slice. Furthermore, radiologists interested in disease progression must compare current images with older images of the same patient on a slice-by-slice basis. Since a typical diagnostic scan of 5 mm thick slices results in approximately 22 slices for each of two data sets (PD and T2), a full comparison of two time steps requires analysis of 88 2D images. Thus, radiologists currently do not have the time or the means to compare MRI scans from any more than two time steps (i.e. the current scan with the most recent previous scan), and cannot easily gain a long-term picture of disease progression.

The objective of our study was to visualize changes in MS lesions over time. We explore various methods by combining ideas from volume rendering, isosurface rendering, and flow visualization.

2. RELATED WORK

2.1 Automated Segmentation

Recent work by members of our group (Atkins, Mackiewich, and Krishnan) [2, 7] has produced methods for automating segmentation of the brain and MS lesions from PD and T2 MRI images. This eliminates the need for time-consuming manual segmentation, and thus makes possible fast, automated determination of lesion volume. Our method extends these approaches to calculate and display changes in lesion volume over time, and to render the segmented time-series volume data in 3D.

2.2 Volume Visualization

Methods to visualize volume data have been extensively studied and developed. Isosurface extraction, such as that performed by the Marching Cubes algorithm, attempts to find surfaces within a volume and display them using traditional graphics methods [3, 9, 10].
An isovalue is arbitrarily chosen to represent the boundary between what is considered within the surface and what is considered outside the surface. Images generated using surface extraction methods provide information about overall structure in a 3D data set, but provide no information about what lies within the surface.
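The surface rendering approach described later in Section 3.2 builds on exactly this kind of isosurface extraction. As a rough illustration, the following minimal sketch uses VTK's Python bindings to extract and display a translucent lesion isosurface; the input file name, isovalue, and color are placeholders rather than values from the original implementation.

```python
import vtk

# Hypothetical structured-points file holding one segmented lesion time step.
reader = vtk.vtkStructuredPointsReader()
reader.SetFileName("lesions_t02.vtk")

# Marching cubes: extract the surface at a histogram-derived isovalue.
surface = vtk.vtkMarchingCubes()
surface.SetInputConnection(reader.GetOutputPort())
surface.SetValue(0, 40)            # illustrative isovalue

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(surface.GetOutputPort())
mapper.ScalarVisibilityOff()       # color comes from the actor, not the data

actor = vtk.vtkActor()
actor.SetMapper(mapper)
actor.GetProperty().SetColor(0.2, 0.2, 1.0)   # e.g. blue for an early time step
actor.GetProperty().SetOpacity(0.3)           # translucent, as in Section 3.2

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

window.Render()
interactor.Start()                 # rotate, zoom, and pan interactively
```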
By contrast, direct volume rendering attempts to visualize information throughout an entire 3D data set, so the user gains more information than is provided by a simple surface [3, 11]. Each point in the data set is classified according to the tissue it belongs to, and is then assigned appropriate color and opacity values (opacity is a measure of how opaque an object is). A classified volume is rendered to the screen using one of many approaches (e.g. ray casting or splatting). Current research in direct volume rendering focuses on improving both image quality and rendering speed. In addition, recent attempts have been made to find efficient data structures and algorithms for rendering time-series volume data [1, 4].

2.3 Flow Visualization

Although time-series data is sometimes considered in volume graphics, most research in this area comes from computational fluid dynamics, where various techniques have been developed to visualize the flow of liquids and gases. Glyphs are small pointed objects (e.g. arrows) that are placed within a scene and made to point in the direction of flow [11]. Glyphs can provide interesting information about flow direction at discrete points, but provide no information about flow between the points. To obtain detailed information, additional glyphs must be placed in the scene, which causes clutter. Other methods to visualize fluid flow include streamlines/timelines (lines consisting of a series of particles whose movement in the flow field is followed over time), texture advection (where small textures are warped in the direction of flow), and line integral convolution (an extension of texture advection).

A few recent attempts have been made to apply flow visualization ideas to tensor data from diffusion-weighted magnetic resonance imaging. Kindlmann and Weinstein use volume rendering with novel methods for assigning color and opacity values [6]. Laidlaw et al. take a different approach, using oriented "brush strokes" to visualize tensors, in a manner similar to glyph visualization or texture advection [8]. Finally, Zhang et al. [14] visualize tensors with the aid of stream tubes and stream surfaces.

3. METHODS

Three distinct visualization approaches (surface rendering, direct volume rendering, and glyphs) were implemented using the Visualization Toolkit (VTK) [11]. Each approach provided a unique view of the underlying data. Prior to visualization, however, significant data preparation was required. Details of the preprocessing and visualization methods are discussed in the following sections.

3.1 Data preparation

Time-series data used in this study consisted of eleven time steps of 16-bit T2 and PD MRI brain scans of dimensions 256 × 256 × 22. Each slice of these data sets was segmented to remove the skull, scalp, and eyes from the brain tissue, using the technique described in [2]. Following this, the data was cast to 8 bits, since our lesion segmentation program (see next paragraph) required data in this form. To ensure a good distribution of values in the 8-bit version, values three standard deviations below the mean intensity were mapped to zero and values three standard deviations above the mean were mapped to 255 (a small sketch of this mapping is given below).

To simplify visualization of MS lesion changes, conspicuous and subtle lesions were segmented from each slice in the brain volume data sets using the method described in [7]. Output of the segmentation program consisted of two binary lesion masks, one for conspicuous lesions and one for all lesions (conspicuous plus subtle).
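The 8-bit conversion described above amounts to a simple intensity windowing. A minimal NumPy sketch, assuming the 16-bit brain volume has already been loaded into an array and that intensities between the two cut-offs are scaled linearly (the function and variable names are illustrative, not from the original programs):

```python
import numpy as np

def to_eight_bit(vol16):
    """Map intensities within +/- 3 standard deviations of the mean onto 0..255.

    Values below mean - 3*std become 0, values above mean + 3*std become 255,
    and values in between are assumed to be rescaled linearly.
    """
    mean, std = vol16.mean(), vol16.std()
    lo, hi = mean - 3.0 * std, mean + 3.0 * std
    clipped = np.clip(vol16.astype(np.float64), lo, hi)
    scaled = (clipped - lo) / (hi - lo) * 255.0
    return scaled.astype(np.uint8)
```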
To convert this data to a non-binary grey scale, any voxel with value 1 was replaced by the value at the same position in the 8-bit T2 or PD brain data. Essentially, the segmentation output was used as a mask to select voxels from the T2 and PD MRI data sets.
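A minimal sketch of this masking step, assuming the binary lesion mask and the 8-bit T2 (or PD) volume are available as NumPy arrays of matching shape; the names are illustrative:

```python
import numpy as np

def apply_lesion_mask(mask, brain_8bit):
    """Where the mask is 1, keep the co-located 8-bit intensity; elsewhere write 0."""
    return np.where(mask == 1, brain_8bit, 0).astype(np.uint8)
```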
Since each data set was acquired in a separate session, the position of the subject does not correspond exactly between time steps; hence, registration is necessary. Registration is particularly important because MS lesions are small relative to the brain as a whole: a slight change in head position could severely affect analysis of lesions within each data slice, since misregistration artefacts could be as great as or greater than actual lesion changes [5]. In all cases, registration was performed using the Automated Image Registration (AIR) software package, version 3.08 [12, 13]. Registration parameters were derived using the AIR program alignlinear with the segmented brain volumes as input (not the segmented lesions). A rigid-body six-parameter model was used and non-interaction of spatial parameter derivatives was assumed (refer to the AIR documentation for an explanation of AIR arguments [12]). Threshold values of 10 were selected for all data sets, and all other AIR default values were used. Following this, the derived parameters for each time step and volume type (PD or T2) were applied to the brain data and to both lesion data sets for that time step and volume type, using the AIR programs mv_air (to select which file to register) and reslice (to perform registration). All default AIR argument values were used here, except that the data sets were not interpolated to cubic voxels; instead, the voxel dimensions were kept the same as those of the input volumes (256 × 256 × 22), so that registered volumes would be the same size as the volume they were registered to. The resulting registered data was used as input to our visualization programs.

3.2 Surface rendering

In one approach, we chose to display lesions as 3D contours using traditional computer graphics techniques. Isosurfaces of MS lesions were extracted using the marching cubes algorithm. Isovalues were selected based on histograms such that all lesion tissue in the segmented data sets would appear. Lesion data from one or more time steps could be displayed in the same scene; each time step was displayed in a different color to emphasize changes. In addition, an isosurface of the entire brain (extracted from the registered but unsegmented T2 data) was displayed to help orient the lesions in terms of their spatial position. All isosurfaces were given opacity 0.3 so that they would be translucent, allowing the user to see other isosurfaces below the uppermost one. User interaction was implemented such that a user could rotate, zoom, and pan the 3D display, as well as save an image in any orientation.

A variation of the above program allowed a user to select a single viewing orientation and produce a series of images for off-line viewing as an animation showing changes over time (from one time step to the next). Because the number of time steps in our data was small, we used linear interpolation between real data sets to produce more frames for a smoother animation. Furthermore, because the program was designed to produce a set of images for off-line viewing, user interaction was not enabled.

3.3 Direct volume rendering

A second approach made use of direct volume rendering, in which all points in the volume contribute to the final image so that more than a simple surface can be seen. However, instead of rendering the lesions themselves, voxel-wise differences between two time steps were calculated and the resulting intensity change was rendered. A color and opacity map was designed to indicate the magnitude and direction (positive or negative) of the change at each voxel. Note that because we segmented the lesions in each time step, non-lesion material has an intensity value of zero; hence, size changes appear as very large intensity changes. User interaction and a surface model of the entire brain were incorporated into the program, similar to the surface rendering program described above.

As with surface rendering, a variation of the direct volume rendering program was designed to produce images for off-line animation of cumulative lesion changes. Linear interpolation between time steps was again used to increase the number of frames, and user interaction was not enabled.

3.4 Glyphs

In the final approach, small glyphs were used to illustrate the direction (increase or decrease) and strength of MS lesion changes. First, changes in lesion data between two time steps were found by subtraction. An isosurface was extracted from this change data using the marching cubes algorithm and displayed for orientation purposes. This surface can be thought of as an "isosurface of change", and was intended to point out areas where change (either positive or negative) was higher than a threshold specified by the user. Glyphs were displayed on the same image to show changes at localized points, both on the isosurface and elsewhere. (A small sketch of the underlying difference computation and the diverging color and opacity map of Section 3.3 is given below.)
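The following minimal sketch illustrates the kind of VTK pipeline described in Sections 3.3 and 3.4: two registered lesion volumes are subtracted voxel-wise and the signed difference is rendered through a diverging color and opacity map. The file names, the 8-bit re-centering, and the specific transfer-function breakpoints are placeholders, not the values used in the original programs.

```python
import vtk

# Hypothetical readers for two registered, segmented lesion volumes (time
# steps 2 and 10); the volumes are assumed to use a signed scalar type
# (e.g. short) so that the subtraction can produce negative values.
reader_a = vtk.vtkStructuredPointsReader()
reader_a.SetFileName("lesions_t02.vtk")
reader_b = vtk.vtkStructuredPointsReader()
reader_b.SetFileName("lesions_t10.vtk")

# Voxel-wise difference: later time step minus earlier time step.
diff = vtk.vtkImageMathematics()
diff.SetOperationToSubtract()
diff.SetInputConnection(0, reader_b.GetOutputPort())
diff.SetInputConnection(1, reader_a.GetOutputPort())

# Re-centre the signed difference so "no change" maps to 128 in an 8-bit range.
shift = vtk.vtkImageShiftScale()
shift.SetInputConnection(diff.GetOutputPort())
shift.SetShift(255)
shift.SetScale(0.5)
shift.SetOutputScalarTypeToUnsignedChar()
shift.ClampOverflowOn()

# Diverging transfer functions: blue = decrease, green = unchanged, red = increase;
# stronger changes are rendered more opaque.
color = vtk.vtkColorTransferFunction()
color.AddRGBPoint(0, 0.0, 0.0, 1.0)
color.AddRGBPoint(128, 0.0, 1.0, 0.0)
color.AddRGBPoint(255, 1.0, 0.0, 0.0)

opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(0, 0.8)
opacity.AddPoint(128, 0.02)
opacity.AddPoint(255, 0.8)

volume_property = vtk.vtkVolumeProperty()
volume_property.SetColor(color)
volume_property.SetScalarOpacity(opacity)

mapper = vtk.vtkFixedPointVolumeRayCastMapper()
mapper.SetInputConnection(shift.GetOutputPort())

volume = vtk.vtkVolume()                 # add to a vtkRenderer as in the earlier sketch
volume.SetMapper(mapper)
volume.SetProperty(volume_property)
```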
In one model, glyphs were designed to point in one direction if there was a positive change at the point, or in the exact opposite direction if the change was negative. They were scaled according to the magnitude of the change, such that a small change resulted in a small glyph and a large change in a large one. In addition, glyphs were colored according to whether the change was negative or positive. We believed that redundantly mapping both glyph color and glyph direction to the direction of change (negative or positive) would emphasize this feature in the data.

A second glyph model was similar, but had glyphs point in the direction of the surface normal instead of simply right or left, and did not scale the glyphs according to the amount of change at the point. The idea here was that lesion areas might be thought of as growing outwards in the direction of the surface normal or shrinking inwards in the opposite direction. Note, however, that we do not know for certain whether this is really the case, since we have not consulted with radiologists on this issue.

In both glyph programs, a surface model of the entire brain could optionally be displayed along with the MS lesions and glyphs for orientation purposes. In addition, the programs were interactive, so that the brain and glyphs could be rotated, zoomed, and panned.
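A minimal sketch of the first glyph model using VTK's vtkGlyph3D: arrows are oriented left or right by the sign of the change and scaled by its magnitude. The sample points and change values are placeholders, and the red/blue coloring (which would be added via a lookup table) is omitted for brevity.

```python
import vtk

points = vtk.vtkPoints()
directions = vtk.vtkFloatArray()
directions.SetNumberOfComponents(3)
magnitudes = vtk.vtkFloatArray()
magnitudes.SetName("change_magnitude")

# Placeholder sample voxels with signed intensity changes.
for (x, y, z), delta in [((10.0, 12.0, 5.0), 35.0), ((40.0, 20.0, 8.0), -20.0)]:
    points.InsertNextPoint(x, y, z)
    sign = 1.0 if delta > 0 else -1.0          # point right for increases, left for decreases
    directions.InsertNextTuple((sign, 0.0, 0.0))
    magnitudes.InsertNextValue(abs(delta))     # magnitude drives the glyph length

seeds = vtk.vtkPolyData()
seeds.SetPoints(points)
seeds.GetPointData().SetVectors(directions)
seeds.GetPointData().SetScalars(magnitudes)

glyphs = vtk.vtkGlyph3D()
glyphs.SetSourceConnection(vtk.vtkArrowSource().GetOutputPort())
glyphs.SetInputData(seeds)
glyphs.SetVectorModeToUseVector()              # orient each arrow along its vector
glyphs.SetScaleModeToScaleByScalar()           # scale each arrow by |change|
glyphs.SetScaleFactor(0.05)                    # illustrative global scaling factor

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(glyphs.GetOutputPort())
actor = vtk.vtkActor()                         # add to a vtkRenderer as in the earlier sketch
actor.SetMapper(mapper)
```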
4. RESULTS AND DISCUSSION

4.1 Summary Statistics

Summary statistics were computed on each data set and compiled into Table 1. Volume values are the sum of MS lesion areas for each slice in the data set multiplied by the slice thickness, and mean values are the average of all non-zero intensities in the volume (i.e. the mean intensity of the lesions, not including the background).

Table 1: Summary statistics for the time-series data. For each time step, the table lists the volume (cm³) and the mean T2 and PD intensities of the conspicuous lesions and of all lesions. (The numeric entries are not reproduced in this transcription.)

Figure 1: Changes in total MS lesion volume over the eleven time steps, plotted separately for conspicuous lesions and for all lesions.

As shown in Figure 1 and Table 1, total lesion volume decreased over time, by about 20% for conspicuous lesions and 15% for all lesions. Meanwhile, the average PD and T2 intensities of the remaining lesion areas did not change. However, these summary values give no information about localized changes. For example, lesions in one part of the brain could expand while others shrink, and areas of increased intensity could balance areas of decreased intensity. We used 3D visualization techniques to explore such spatial variations. (A small sketch of the volume and mean-intensity computations is given below.)
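A minimal sketch of these two computations, assuming a segmented lesion volume stored as a NumPy array (slices along the first axis) with background voxels equal to zero; the in-plane voxel area is passed in explicitly, the 0.5 cm default corresponds to the 5 mm slice thickness of the scans, and all names are illustrative:

```python
import numpy as np

def lesion_volume_cm3(lesions, voxel_area_cm2, slice_thickness_cm=0.5):
    """Sum of per-slice lesion areas multiplied by the slice thickness."""
    slice_areas = [(sl > 0).sum() * voxel_area_cm2 for sl in lesions]
    return sum(slice_areas) * slice_thickness_cm

def mean_lesion_intensity(lesions):
    """Average of the non-zero voxels only, i.e. the lesions without the background."""
    return lesions[lesions > 0].mean()
```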
4.2 Surface rendering

Figure 2 illustrates images generated by the surface extraction algorithm. (Figures 2–5 also appear on a color plate in the paper proceedings.) Each lesion data set is a different color, such that lesions from the first time step are blue, lesions from the last time step are red, and middle time steps are assigned varying shades of purple. Areas containing lesions from more than one time step appear as color combinations. These displays provided a 3D impression of how lesions were oriented within the brain, the amount of volume they occupied, and where large and small lesions were located. Since any number of data sets could be displayed at once, it was possible to see that some older lesions had disappeared (blue areas in Figure 2) and some new lesions had appeared (red areas in Figure 2 (a)). Note, however, that because the 3D objects are projected into two dimensions, color combinations may be caused by lesions lying behind one another rather than in exactly the same spot. Rotation of objects on the screen allowed the user to gain additional understanding of the 3D orientation, which helped resolve this ambiguity.

It was also noticed that with all eleven time steps displayed (see Figure 2 (c)), the image appeared almost entirely red, giving the impression that a large number of new lesions had appeared. This coloring was actually caused by the opacity used (0.3) being too high, so that the last four data sets in the time series (all in shades of red) occluded all previous data sets (in shades of blue and purple). Lowering the opacity solves this problem, but makes it hard to see areas with only a single lesion. Thus, we believe that interactive tools that allow the user to assign each time step its own opacity and isovalue could significantly improve the usability of the system.

Figure 2: Surface rendering results for T2 conspicuous lesions: (a) 2 time steps (2 and 11); (b) 4 time steps (2, 4, 6, and 8); (c) all 11 time steps. The marching cubes algorithm was used to extract isosurfaces from several registered lesion data sets. All isosurfaces were then rendered to the same window using VTK. An isosurface of the entire brain was rendered in the same way, for orientation purposes.

Animations showing changes in lesion surfaces allowed all eleven time steps to be viewed without occlusion; however, since the animation program was not interactive, the brain could not be rotated or zoomed, and the program had to be re-run whenever the user wanted to change parameters. An interactive animation is the obvious solution to this problem.

4.3 Direct volume rendering

Surface models provided information about lesion volume and about the appearance and disappearance of individual lesions, but no information about localized intensity changes. This information may be useful in diagnosis, although the exact meaning of intensity changes in white matter is not yet known. Like the manual methods, our lesion segmentation method relies on the high intensity of the lesion pixels in the PD and T2 images; spectroscopic evaluation of MS lesions and normal-appearing white matter is a current research area. To attempt to visualize intensity information, we used direct volume rendering.

Direct volume rendered images of MS lesion changes are displayed in Figure 3. Areas of strong increase appear red, areas of strong decrease appear blue, and relatively unchanged lesion areas appear green. Slightly changed areas appear as linear combinations of the appropriate colors. Like the surface rendering program, the direct volume rendered image contains a surface model of the head, and users can rotate objects in the scene to improve spatial orientation. Animation can be used to display cumulative changes as increasingly bright reds and blues, but, as with the surface animation, user interaction is lost.

Figure 3: Direct volume rendered images comparing T2 intensity of time steps 2 and 4 for (a, c) conspicuous lesions and (b) all lesions. Areas of increased intensity are shown in red, decreased intensity in blue, and relatively unchanged intensity in green. (c) The side view appears blurry because of the low resolution in one dimension of the data.
Although we found that the overall intensity of the lesions did not change (see Table 1), Figure 3 illustrates that localized changes did occur. For example, in Figure 3 (a), the blue spot in the middle represents an intensity drop, while the red and brown spots to the left and right represent intensity increases. Notice that the blue spot has disappeared in the view containing all lesions (Figure 3 (b)). In this case, subtle lesions not contained in the conspicuous lesion data sets occluded the intensity drop within the conspicuous lesions. This became obvious on rotation of the image, when the change in viewpoint reduced the level of occlusion such that the blue spot could once again be seen. Lowering the opacity of each data point might also have prevented occlusion; thus, we believe that a method to interactively change transfer functions (the functions that assign opacity and color values to voxels) would be very useful for data analysis.

In addition, we found the poor resolution in one dimension of our data sets to be a problem. Resolution was 256 × 256 within each slice, but there were only 22 slices in each data set. The first obvious problem is that some lesions could be missed altogether in an MRI scan, such that they could seem to appear or disappear from one scan to the next. Furthermore, Figure 3 (c) illustrates that volume rendering these data sets with the viewpoint at the side results in very blurry images that are difficult to understand.

4.4 Glyphs

Figure 4 and Figure 5 show images of glyph models 1 and 2, respectively. In both cases, glyphs appear red if the change at that point was positive and blue if the change was negative. In Figure 4, glyphs are scaled according to the strength of the change and point in opposite directions depending on whether the change was positive or negative. Glyph direction and size alone provide all the information, but it was believed that the addition of a color dimension would increase the speed at which information was conveyed.

Figure 4: Glyphs displaying MS lesion intensity changes between time steps 2 and 10, shown with an isosurface of the entire brain. Glyphs at points of increased T2 intensity are red and point right, and glyphs at points of decreased T2 intensity are blue and point left. Glyph length indicates the magnitude of the change. The inset shows part of the image after a zoom operation.

Figure 5 is similar, but glyphs point in the direction of the surface normal and are not scaled according to the strength of the change. The hypothesis here was that lesions might grow and shrink in the direction of the surface normal; of course, this may or may not be the case in reality. In both glyph models, an isosurface of change indicated areas where the change magnitude was greater than a user-specified threshold. In addition, users had control over the factor by which to scale glyphs (how big glyphs should be relative to the brain isosurface) and the fraction of glyphs to display. For example, Figure 5 (a) uses a low zoom factor, so to avoid excessive clutter and ensure glyphs are visible, one would choose a higher scaling factor and a smaller fraction of glyphs to display than for Figure 5 (b), which has a much higher zoom factor. A problem arose, however, since these parameters were specified ahead of time and could not be changed while the program was running. As a result, as users zoom in and out, the scaling of the glyphs (relative to the brain) becomes inappropriate.
For example, the inset in Figure 4 shows glyphs that appear excessively large when the image is zoomed. Information could be conveyed much better if the glyph fraction and scaling either (1) changed automatically as the image was zoomed in and out, or (2) could be changed interactively by the user during program execution.

Figure 5: Glyphs displaying MS lesion intensity changes between time steps 2 and 10. Glyphs at points of increased T2 intensity are red, and those at points of decreased T2 intensity are blue. Glyphs point in the direction of the surface normal. (a) Zoom factor 1, scaling 0.01, 1/25 of all glyphs displayed; an isosurface of the entire brain is also shown. (b) Zoom factor 15, scaling 0.002, 1/6 of all glyphs displayed.

Informal evaluation of our results by two physicians revealed that they were more comfortable with the surface and volume rendered images than with our glyph images. They felt that the glyphs conveyed a false impression of spatial movement, and were unlike any images they were accustomed to viewing. Training and improved user interaction may or may not alleviate such problems in the future.

5. CONCLUSIONS AND FUTURE WORK

Visualization of changes in MS lesions is an interesting problem that has yet to be fully explored. This study attempted to evaluate several visualization methods in terms of their application to three-dimensional MS lesion visualization. Surface models provided useful information about the spatial orientation of lesions and changes in lesion volume, but provided no information about intensity changes. Direct volume rendered images of lesion changes addressed this problem by illustrating spatially localized changes of intensity. Glyph images were used to approach the problem from a different perspective. Glyphs clearly illustrate lesion changes at localized points in the brain, but at the cost of a very cluttered view. Further work is needed to optimize these images to convey information effectively. In addition, we believe that the development of graphical interfaces and improved user interaction will improve the effectiveness of our visualizations. Other future work includes optimizing the programs for speed, exploring other visualization techniques, re-examining our arbitrarily chosen color scales to see if they can be improved, looking at other imaging modalities and applications, and iterative development and analysis with users.

ACKNOWLEDGMENTS

The authors wish to thank Craig Jones for providing the time series data. The assistance of Wilf Rosenbaum and Kevin Siu with existing segmentation programs is much appreciated.

REFERENCES

1. Anagnostou, Kostas, Tim J. Atherton, and Andrew E. Waterfall, 4D Volume Rendering With The Shear Warp Factorisation, Volume Visualization and Graphics Symposium 2000, Oct. 2000.
2. Atkins, M.S. and Blair T. Mackiewich, Fully Automatic Segmentation of the Brain in MRI, IEEE Transactions on Medical Imaging, 17(1), Feb. 1998.
3. Bankman, Isaac N., ed., Handbook of Medical Imaging: Processing and Analysis, Academic Press: San Diego, California, 2000.
4. Ellsworth, David, Ling-Jen Chiang, and Han-Wei Shen, Accelerating Time-Varying Hardware Volume Rendering Using TSP Trees and Color-Based Error Metrics, Volume Visualization and Graphics Symposium 2000, Oct. 2000.
5. Hajnal, Saeed, Oatridge, Williams, Young, and Bydder, Detection of Subtle Brain Changes Using Subvoxel Registration and Subtraction of Serial MR Images, Journal of Computer Assisted Tomography, 19(5), 1995.
6. Kindlmann, Gordon, and David Weinstein, Hue-Balls and Lit-Tensors for Direct Volume Rendering of Diffusion Tensor Fields, IEEE Visualization '99, Oct. 1999.
7. Krishnan, K. and M.S. Atkins, Segmentation of Multiple Sclerosis Lesions in MRI — An Image Analysis Approach, Proceedings of the SPIE International Symposium on Medical Imaging, Feb.
8. Laidlaw, David H., Eric T. Ahrens, David Kremers, Matthew J. Avalos, Russell E. Jacobs, and Carol Readhead, Visualizing Diffusion Tensor Images of the Mouse Spinal Cord, IEEE Visualization '98, Oct. 1998.
9. Levoy, M., Display of Surfaces from Volume Data, IEEE Computer Graphics and Applications, 8(3), May 1988.
10. Lorensen, W.E. and Cline, H.E., Marching Cubes: A High Resolution 3D Surface Construction Algorithm, Computer Graphics, 21(4), pp. 163-169, July 1987.
11. Schroeder, Martin, and Lorensen, The Visualization Toolkit, 2nd ed., Prentice Hall PTR: New Jersey.
12. Woods, R., Automated Image Registration Documentation.
13. Woods, Cherry, and Mazziotta, Rapid Automated Algorithm for Aligning and Reslicing PET Images, Journal of Computer Assisted Tomography, 16, 1992.
14. Zhang, Song, Charles T. Currey, Daniel S. Morris, and David H. Laidlaw, Streamtubes and Streamsurfaces for Visualizing Diffusion Tensor MRI Volume Images, IEEE Visualization 2000: Work in Progress, Oct. 2000.