Integration and Visualization of Multimodality Brain Data for Language Mapping
Andrew V. Poliakov, PhD, Kevin P. Hinshaw, MS, Cornelius Rosse, MD, DSc, and James F. Brinkley, MD, PhD
Structural Informatics Group, Department of Biological Structure, University of Washington, Seattle, Washington, USA

ABSTRACT

A goal of the University of Washington Brain Project is to develop software tools for processing, integrating and visualizing multimodality language data obtained at the time of neurosurgery, both for surgical planning and for the study of language organization in the brain. Data from a single patient consist of four magnetic resonance-based image volumes, showing anatomy, veins, arteries and functional activation (fMRI). The data also include the locations, on the exposed cortical surface, of sites that were electrically stimulated to test for language function. These five sources are mapped to a common MR-based neuroanatomical model, then visualized to gain a qualitative appreciation of their relationships prior to quantitative analysis. These procedures are described and illustrated, with emphasis on the visualization of fMRI activation, which may lie deep in the brain, with respect to surface-based stimulation sites.

INTRODUCTION

The Decade of the Brain has seen unprecedented growth in methods for imaging the functioning brain. Techniques such as positron emission tomography (PET), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI) provide insight into localized brain activity during specific cognitive tasks, to the degree that it is now possible to contemplate the creation of cognitive models supported by real data. To validate such models it is necessary to integrate multiple forms of imaging and neurophysiological data, since no single technique provides all the information.
A major tenet of the University of Washington (UW) Structural Informatics Group (and of the entire national Human Brain Project) is that anatomy provides the substrate on which these disparate forms of data can be integrated. Given this integration, various visualization and analysis techniques can be applied to gain a better understanding of the relationships among the data sources, an understanding that can lead to deeper knowledge of brain function. The purpose of this paper is to describe some of these techniques and to show how they are being applied, in the UW Human Brain Project, to the study of language organization in the brain.

DATA ACQUISITION AND INTEGRATION

The advent of non-invasive functional imaging techniques allows language processing to be studied in living subjects. Our goal is to develop methods for integrating these and other forms of language data, and to organize them in an information system that can be federated with other Brain Project sites. Data integration involves two major steps: (1) integration of multimodality data from a single patient, the subject of this paper, and (2) integration of data across multiple patients, a much harder problem that is a major research objective of the entire Human Brain Project.

Data Acquisition. The primary source of data for the UW Human Brain Project is collected in preparation for, and during, neurosurgery for intractable epilepsy. The objective of this surgical procedure is to resect cortical areas of the temporal lobe that generate and propagate epileptic seizures, while sparing the patient's language function as much as possible. In planning this procedure the patient undergoes a number of clinical tests. MR scans acquired prior to surgery yield three image volumes, corresponding to cortical anatomy, veins and arteries. During neurosurgery, electrical stimulation of the cortex is applied while the awake patient performs verbal tasks.
The objective of this mapping procedure is to pinpoint cortical areas critical for language function. Resecting such areas would significantly impair the patient's speech, and should be avoided. On the other hand, resecting non-language-specific areas is much less disruptive in this respect, while still providing relief from the patient's severe epileptic condition. In addition, data from other modalities are collected, including, for example, phonograms of the patient's speech during intraoperative electrical stimulation of particular sites.

Recently, functional MRI studies were added to the protocol on an experimental basis. This procedure is performed at the same time as the other three MR imaging procedures (anatomy, veins and arteries), while the patient's head remains immobilized in the MR scanner. In the fMRI procedure the patient performs a language-related task (e.g. silent object naming) that is as close as possible to the task that will be performed during stimulation mapping at neurosurgery. This task is alternated with a control task, and MR images are acquired during both. Individual voxels are statistically compared for significant change between the control and language tasks, and a difference image volume is constructed showing those voxels whose activation exceeds a probability threshold. The resulting volume is output in the same format as the other MR image volumes.

Functional MRI potentially offers great advantages in surgical planning, since it could eliminate the need for stimulation mapping. It could also be of great use in studying language function in normal subjects. However, fMRI is still a highly experimental protocol in which the location and amount of observed activation vary considerably from one study to the next, and from one protocol to the next. The possibility of directly comparing fMRI with the gold standard provided by surgical stimulation offers a unique opportunity to develop calibration methods for fMRI. Before these sources of data can be compared, however, they must be integrated and visualized in terms of an underlying anatomical model.

Data Integration. The primary method of integration is to construct a patient-specific model of the brain from the structural MR scans. The model is created using a shape-based approach, in which a low-resolution generic cortical shape model, learned from a training set, guides low-level image processing operators in their search for the brain surface. The resulting shell is then used as a mask to remove non-cortical structures, and as a starting point for more detailed surface extraction [1]. The three other MR image volumes (veins, arteries and fMRI) are registered to the anatomy MR volume, and hence to the extracted anatomical model, by assuming minimal patient motion and expressing all voxels in terms of the center of the fixed magnet. This initial registration can then be adjusted manually to account for small amounts of patient movement.
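The voxelwise task-versus-control comparison described above can be sketched in a few lines. This is a minimal NumPy illustration under stated assumptions, not the authors' actual processing pipeline: the function name, the use of a Welch t statistic, and the threshold value are all illustrative choices.

```python
import numpy as np

def activation_volume(task_runs, control_runs, t_thresh=4.0):
    """Voxelwise comparison of task vs. control acquisitions.

    task_runs, control_runs: arrays of shape (n_runs, X, Y, Z).
    Returns a difference volume that is zero wherever the
    task-vs-control change does not exceed the t threshold.
    """
    n1, n2 = len(task_runs), len(control_runs)
    m1, m2 = task_runs.mean(axis=0), control_runs.mean(axis=0)
    v1 = task_runs.var(axis=0, ddof=1)
    v2 = control_runs.var(axis=0, ddof=1)
    # Welch t statistic per voxel (unequal-variance form)
    t = (m1 - m2) / np.sqrt(v1 / n1 + v2 / n2)
    # Keep the mean difference only where it is significant
    return np.where(t > t_thresh, m1 - m2, 0.0)
```

In practice the threshold would be chosen from the t distribution for a desired p value and corrected for the number of voxels tested; the sketch simply hard-codes a cutoff.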
(We are currently developing techniques to do this, although we have observed only minimal movement in the 29 patients imaged to date.) Once the four image volumes are aligned, the low-resolution cortical shell is used to mask non-cortical veins and arteries from the vein and artery volumes, and to remove fMRI artifacts (such as those due to facial muscle movement) from the fMRI data.

A different technique is required for integrating the surgical stimulation data, since these data are not MR-based. As described elsewhere [1], we use a visualization-based approach: the cortical surface model is combined with the MR-based models of the cortical veins and arteries, and rendered so as to visually match, as closely as possible, a photograph of the cortical surface exposed at neurosurgery. The veins and arteries provide landmarks that are readily seen on the photograph, and that allow a user to match the detailed cortical anatomy on the photograph with that on the visualization. An interactive tool is then used to map the locations of stimulation sites, as seen on the photograph, onto the reconstructed cortex. These locations are saved as 3D coordinates in the underlying magnet coordinate system, and stimulation sites found to be associated with language are recorded in the database. Repeatability studies found this procedure to be at least as accurate as the localization of the surgical stimulation sites themselves [2].

VISUALIZATION OF LANGUAGE DATA

The result of the above procedures is that the five sources of data (MR anatomy, MR veins, MR arteries, fMRI, and stimulation sites; see Figure 1) are all expressed in terms of a patient-specific model of anatomy, which is in turn expressed in terms of the global magnet-based coordinate system. Simple transforms based on well-defined landmarks can then be applied to express the data in a brain-centered frame of reference, such as the commonly used Talairach coordinate system.
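A landmark-based transform of the kind mentioned above can be illustrated as a least-squares affine fit between corresponding points in the two coordinate systems. This is a generic sketch, not the authors' implementation; the function names and the use of `numpy.linalg.lstsq` are illustrative assumptions.

```python
import numpy as np

def landmark_affine(src_pts, dst_pts):
    """Fit an affine map dst ~ A @ src + b from paired landmarks.

    src_pts: (n, 3) landmark coordinates in magnet space.
    dst_pts: (n, 3) the same landmarks in the brain-centered
             (e.g. Talairach-like) frame; n >= 4 non-coplanar points.
    """
    n = len(src_pts)
    X = np.hstack([src_pts, np.ones((n, 1))])        # (n, 4) homogeneous
    M, *_ = np.linalg.lstsq(X, dst_pts, rcond=None)  # (4, 3) solution
    A, b = M[:3].T, M[3]
    return A, b

def apply_affine(A, b, pts):
    """Transform (n, 3) points with the fitted affine."""
    return pts @ A.T + b
```

With exact landmark correspondences the fit is exact; with noisy landmarks it gives the least-squares compromise, which is the usual behavior wanted for this kind of registration.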
The next problem is to visualize these data, alone or in combination, in order to gain a qualitative appreciation of their relationships (if any). This visual appreciation will then inspire the development of techniques that quantitatively analyze these relationships. Including fMRI data requires new methods of data visualization: fMRI activation is typically observed below the surface of the brain, and often occurs in deeper structures (the bottoms of the sulci, the medial wall of the temporal lobe, etc.). For this reason, fMRI data cannot readily be compared with the stimulation map on the surface of the brain, or even shown in a 3D view along with the anatomical features of the brain surface reconstruction. We have therefore implemented several new methods for visualizing the relationship between fMRI and stimulation data.

Projecting fMRI Data onto the Surface of the Brain. In many cases the fMRI activation sites are close to the cortical surface. For these cases we implemented highlighting of the surface areas located in the immediate vicinity of the fMRI activation sites. For each vertex on the brain surface mesh, the color is modulated by a weight function W reflecting the fMRI intensity I in the vicinity of that point. The contribution of each voxel is inversely proportional to the sum of the squared distance r between the vertex and the voxel and the squared inter-electrode distance d (5 mm):

W = \alpha \int \frac{I(x, y, z)}{r^2 + d^2} \, dx \, dy \, dz
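In discrete form, the integral above becomes a sum over activation voxels. A minimal NumPy sketch of the per-vertex weight (the function name and the vectorized formulation are illustrative, not the authors' SKANDHA4 code):

```python
import numpy as np

def vertex_weight(vertex, voxel_coords, intensities, d=5.0, alpha=1.0):
    """Discrete form of W = alpha * sum_v I(v) / (r_v^2 + d^2).

    vertex:        (3,) coordinates of one surface mesh vertex.
    voxel_coords:  (n, 3) centers of suprathreshold fMRI voxels.
    intensities:   (n,) fMRI intensity I at those voxels.
    d:             inter-electrode distance in mm (5 mm in the paper).
    """
    # Squared distance from the vertex to every activation voxel
    r2 = np.sum((voxel_coords - vertex) ** 2, axis=1)
    return alpha * np.sum(intensities / (r2 + d ** 2))
```

The resulting weight would then modulate the vertex color, so surface regions near strong activation are highlighted while distant activation contributes little.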

This feature permits direct comparison of the fMRI results and the stimulation map (Figure 2A).

Visualizing Deep Structures. Projection of fMRI data onto the brain surface reflects activation near the surface of the brain. However, activation also occurs in deep structures that may be spatially correlated with the surface sites. Deep activation is typically visualized by superimposing color-coded fMRI intensities onto individual structural MR slices [3], and we have implemented this approach as well. However, static 2D images are of limited use when attempting to study functional activation patterns in 3D. We therefore developed tools for visualizing the 3D relationships of fMRI to the other data sources. For example, we developed a tool to interactively scroll through the volume of the brain along any of the three dimensions, showing the corresponding MR slice with respect to a 3D model. Since the reconstructed brain surface contains a large number (about 10^6) of polygons, it is currently not practical to render the full surface interactively. Instead, we use the shell extracted in the shape-based segmentation step, which consists of only about 10^2 polygons, together with a fast algorithm for combining the structural and functional images. As the user moves one of the scroll bars, he or she observes the changing 2D structural MR image with highlighted functional MR activation, as well as the location of the slice plane with respect to the low-resolution shell. Once the mouse is released, the detailed surface model is rendered.

To further explore the deep structures of the brain, we developed cut-away views of the brain surface reconstruction. Three orthogonal planes (axial, sagittal and coronal) specify the region of the brain to be dissected and removed from the view, revealing the deeper brain structures. The user interactively adjusts the position of these planes with respect to the low-resolution shell.
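The slice-with-overlay display described above amounts to a simple compositing step: extract one structural slice, normalize it to grayscale, and paint the suprathreshold functional voxels in color. An illustrative NumPy version follows; it is not the SKANDHA4 implementation, and the red-channel color scheme is an assumption.

```python
import numpy as np

def overlay_slice(anat_vol, fmri_vol, axis, index, thresh=0.0):
    """Composite one 2D slice: grayscale anatomy with
    suprathreshold fMRI voxels highlighted in red.

    anat_vol, fmri_vol: co-registered 3D volumes of equal shape.
    axis, index:        which orthogonal slice to extract.
    Returns an (H, W, 3) RGB image in [0, 1].
    """
    anat = np.take(anat_vol, index, axis=axis).astype(float)
    func = np.take(fmri_vol, index, axis=axis)
    # Normalize the structural slice to [0, 1] grayscale
    anat = (anat - anat.min()) / (np.ptp(anat) + 1e-9)
    rgb = np.stack([anat, anat, anat], axis=-1)
    # Paint activated voxels red on top of the anatomy
    rgb[func > thresh] = [1.0, 0.0, 0.0]
    return rgb
```

An interactive viewer would call this once per scroll event, which is cheap enough for real-time use even on the hardware of the period, since only one 2D slice is recomputed.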
Combined structural and functional MR images are then texture-mapped onto the dissecting planes to show the internal anatomy and functional activation in the deep structures (Figure 2B).

Software Implementation. All integration and visualization software was developed using an in-house graphics toolkit, SKANDHA4, which combines a subset of Common Lisp, useful for fast interactive programming and prototyping, with the ability to add pre-compiled C-based primitive functions that significantly accelerate computationally demanding routines. Among other features, SKANDHA4 supports 3D graphics and graphical user interfaces, and includes a module for processing MRI data.

DISCUSSION

In this report we describe techniques for integrating and visualizing multimodality language data in terms of a patient-specific model of anatomy. This kind of integration and visualization is becoming increasingly prevalent in brain research [3,4]. In all cases the use of a patient-specific model of anatomy provides the substrate on which these disparate sources can be related. The integration of fMRI with stimulation mapping data in our project offers an unprecedented opportunity to calibrate fMRI against a known gold standard. As more patients accumulate in our database, it will become possible first to visualize qualitatively, and then to analyze quantitatively, whether there is a relationship between these two data sources, and to tune the fMRI acquisition parameters to strengthen any relationships that may be found. If strong correlations are observed consistently, it may become possible to eliminate the need for stimulation mapping, an invasive and lengthy procedure. The procedures described in this paper implement only the first step in the integration process: relating multimodality data for a single patient. The second step is much harder: relating data from multiple patients.
Although many groups have developed techniques to relate deep structures using linear or nonlinear warps [4], it is much more difficult to relate surface structures because of the highly variable nature of the cortical surface. The development of such methods is a major focus of national Human Brain Project research.

Acknowledgements

This work was funded by Human Brain Project grant DC02310, National Institute on Deafness and Other Communication Disorders. We thank David Corina, Keith Steury, Ken Maravilla, George Ojemann and Kate Mulligan for various aspects of data acquisition.

References

1. Hinshaw KP, Brinkley JF. Incorporating constraint-based shape models into an interactive system for functional brain mapping. Proc AMIA Symp 1998:921-925.
2. Modayur BR, Prothero J, Ojemann G, Maravilla K, Brinkley JF. Visualization-based mapping of language function in the brain. Neuroimage 1997;6:245-258.
3. Van Essen DC, Drury HA. Structural and functional analyses of human cerebral cortex using a surface-based atlas. J Neurosci 1997;17:7079-7102.
4. Thompson PM, Toga AW. A surface-based technique for warping 3-dimensional images of the brain. IEEE Trans Med Imag 1996;15:11-16.

Figure 1. Five sources of brain map data. A-D show sample renderings of surfaces extracted from MR-based image volumes obtained from the same patient's brain: A) cortex; B) arteries; C) veins; D) fMRI activation sites. E) An intraoperative photograph of the patient's cortical surface exposed during surgery; numbered tags mark the locations of sites stimulated for mapping of language function. A 1 cm scale bar is shown at the bottom of the photograph.

Figure 2. Visualization of the five integrated sources of brain map data: cortex, arteries, veins, fMRI and stimulation mapping sites (dots). A) Highlighted areas represent fMRI activation sites that are close to the surface. B) Cut-away view of the brain developed for visualizing functional activation in deep structures.