QAV-PET: A Free Software for Quantitative Analysis and Visualization of PET Images



Brent Foster, Ulas Bagci, and Daniel J. Mollura

1 Getting Started

1.1 What is QAV-PET used for?

Quantitative Analysis and Visualization of PET Images (QAV-PET) is an open-source software package implemented in the popular MATLAB environment that allows easy, intuitive, and efficient visualization and quantification of multi-modal medical images. In particular, the software is well suited for PET-CT and MRI-PET images. It allows multi-modal images to be viewed simultaneously, which lets the researcher incorporate information that is not available from a single-modality image for improved quantification and analysis. Additionally, the software includes a framework for quantification and analysis that can be easily manipulated and tuned to fit any medical imaging application. The following sections outline this framework and how to use the software.

1.2 Downloading and Installation Instructions

The software is freely available on the MATLAB File Exchange. Once the .zip file is downloaded, un-zip the folder and open MATLAB. Once MATLAB has launched, set the current folder to the un-zipped folder. Launch the GUI by right-clicking on the Fusedsuv.m file and selecting Run.

2 Visualization

2.1 Opening the Images

After the GUI is run, a pop-up menu will appear asking the user to first select the functional image file (PET) in Analyze (.hdr and .img) or DICOM format. After the file is selected, press the OK button. After several seconds (up to about 20 seconds if the image file is very large) another pop-up menu will appear asking for the anatomical image (CT or MRI) file, again in either Analyze or DICOM format.
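
As an aside for readers who want to script around the GUI, the following is a minimal sketch of how the same kinds of input files (Analyze 7.5 and DICOM) can be read directly in MATLAB with standard Image Processing Toolbox functions. The file names are placeholders; QAV-PET itself performs this loading through its file dialogs.

    % Sketch: reading an Analyze 7.5 (.hdr/.img) functional image and a DICOM
    % anatomical slice with Image Processing Toolbox functions. File names are
    % placeholders, not files shipped with QAV-PET.
    petInfo   = analyze75info('pet_image.hdr');   % header of the functional (PET) volume
    petVolume = analyze75read(petInfo);           % 3-D voxel data

    ctInfo  = dicominfo('ct_slice_001.dcm');      % one slice of the anatomical (CT) series
    ctSlice = dicomread(ctInfo);                  % a DICOM series is read slice by slice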

Fig. 1: The QAV-PET GUI with a PET-CT of a rabbit infected with tuberculosis.

Once the GUI is launched, you will see something very similar to Figure 1. The default display shows a fusion of the anatomical and functional images. There is a drop-down menu at the top right which shows the current image being viewed; initially it is set to Show Fused.

2.2 Adjusting Contrast and Brightness

2.2.1 Anatomical or Functional

The contrast and brightness can be adjusted easily and intuitively using only the mouse buttons and cursor motion. The brightness is controlled by holding down the left mouse button and moving the cursor vertically, while the contrast is adjusted by holding down the left mouse button and dragging the cursor horizontally (see Figure 2). If at any time the user wishes to return to the default contrast and brightness, there is a button titled Contrast in the bottom left of the GUI.

Fig. 2: Adjusting the contrast and brightness for the functional and anatomical view cases.
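
The following is a minimal, self-contained sketch of the drag-to-adjust interaction described above, implemented as a window/level adjustment of the axes display limits. It only illustrates the idea; the function name, structure, and sensitivity values are illustrative assumptions, not the QAV-PET source.

    % contrast_demo.m -- sketch of brightness/contrast adjustment by mouse drag.
    % Vertical drag changes the display level (brightness); horizontal drag changes
    % the display window (contrast). Sensitivity values are arbitrary.
    function contrast_demo(slice)
    if nargin < 1, slice = phantom(256); end       % stand-in slice (Image Processing Toolbox)
    fig = figure; imagesc(slice); colormap(gray); axis image off
    s = struct('level', 0.5, 'width', 1.0, 'last', []);
    guidata(fig, s);
    set(fig, 'WindowButtonDownFcn',   @(f, ~) startDrag(f));
    set(fig, 'WindowButtonUpFcn',     @(f, ~) stopDrag(f));
    set(fig, 'WindowButtonMotionFcn', @(f, ~) onDrag(f));
    end

    function startDrag(f)
    s = guidata(f); s.last = get(f, 'CurrentPoint'); guidata(f, s);
    end

    function stopDrag(f)
    s = guidata(f); s.last = []; guidata(f, s);
    end

    function onDrag(f)
    s = guidata(f);
    if isempty(s.last), return; end
    p = get(f, 'CurrentPoint'); d = p - s.last; s.last = p;
    s.level = s.level - 0.002 * d(2);              % vertical drag   -> brightness
    s.width = max(0.01, s.width + 0.002 * d(1));   % horizontal drag -> contrast
    caxis([s.level - s.width/2, s.level + s.width/2]);
    guidata(f, s);
    end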

2.2.2 Fused Image

Adjusting the contrast and brightness is slightly different in the image fusion case. The reason for this is to let the user quickly and easily change the opacity between the functional and anatomical images using the mouse. In this case, holding the left mouse button and moving the cursor vertically adjusts the opacity between the anatomical and functional images, while holding the left mouse button and moving the cursor horizontally adjusts the threshold applied to the functional image, essentially adjusting the contrast and brightness of the PET image. This threshold is applied to the PET image to remove background areas and improve visibility of the underlying anatomical image. Additionally, the opacity can be adjusted with the slider in the bottom left of the GUI; the slider moves in relation to the opacity change made by the mouse movement. See Figure 3 for an example of the effect on a fused PET-CT image.

Fig. 3: Example of varying the opacity between PET and CT on the segmentations from Fig. 4. The opacity ranges from 0, fully anatomical (CT) information, to 1, fully functional (PET) information.
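
To make the fused view concrete, here is a minimal sketch of overlaying a thresholded PET slice on a CT slice with an adjustable opacity, which is the kind of display described above. The function name, inputs, and color choices are illustrative assumptions, not the QAV-PET implementation.

    % show_fused.m -- sketch of a PET-over-CT fused display.
    % opacity in [0, 1]; petThreshold suppresses PET background so the anatomy
    % underneath remains visible (as described for the fused view).
    function show_fused(ctSlice, petSlice, opacity, petThreshold)
    ctGray = repmat(mat2gray(ctSlice), [1 1 3]);    % CT as truecolor grayscale so the
    image(ctGray); axis image off; hold on          % PET colormap does not recolor it
    hPet = imagesc(petSlice); colormap(hot);
    set(hPet, 'AlphaData', opacity .* (petSlice > petThreshold));
    hold off
    end

For example, show_fused(ct(:, :, k), pet(:, :, k), 0.5, 2.0) would blend slice k with 50% PET opacity and hide PET values below 2.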

2.3 Zoom and Panning

Now that the contrast, brightness, and opacity can be adjusted for optimal viewing, the zoom and pan features allow the user to focus on only the most important sections of the image. Both zooming and panning can be activated in two different ways. First, there is a mouse-button-plus-cursor-motion mode: panning uses the right mouse button and zooming uses the middle mouse button, and holding the respective button while moving the cursor adjusts the pan or zoom. Secondly, there are two buttons in the bottom middle of the GUI named PAN and ZOOM which, after clicking, activate the default zoom and pan features within MATLAB; they follow the standard behavior one would expect in MATLAB.

Tip: If the user activates either the zoom or pan button within the GUI, the user must click on the respective button again to turn the MATLAB zoom or pan feature off before the contrast, brightness, and other features that rely on cursor movement can be used again.

Table of the buttons and their results:

Key            Result
Q              Cycle between Functional, Anatomical, and Fused views
Space Bar      Add ROI to the current image label
Control        Subtract from the current image label
Left Mouse     Drag vertically to change brightness; drag horizontally to change contrast
Middle Mouse   Drag to zoom in and out
Right Mouse    Drag to pan up, down, left, and right
Mouse Wheel    Scroll to change the current slice shown

2.4 Changing Current Slice

The current slice can be changed with the mouse wheel or by clicking and dragging the slider in the bottom middle of the GUI.

2.5 Changing Image Shown: Functional, Anatomical, Fused

As stated previously, the default view is a fusion of the anatomical and functional images, and the drop-down menu at the top right shows the current image being displayed. There are three ways of changing the current image view in QAV-PET. First, the drop-down menu can be used to select the image to be shown. Secondly, the user can press the Q key, which cycles between the three image views: functional, anatomical, and fused. This allows the user to quickly switch views while segmenting the region of interest in the images; the key can be changed via the Settings button in the top right of the GUI. Lastly, the user can left-click on the thumbnail images in the top left, which show the anatomical and functional image of the currently selected slice; this opens the respective imaging view.
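
The shortcut table above maps naturally onto MATLAB figure callbacks. Below is a minimal, illustrative sketch (not the QAV-PET source) of how key presses and the scroll wheel can drive view cycling and slice selection; the ROI actions are only stubbed out with messages.

    % shortcut_demo.m -- sketch of keyboard and scroll-wheel shortcuts on a volume.
    function shortcut_demo(volume)
    fig = figure;
    s = struct('slice', 1, 'view', 1);   % view 1/2/3 = functional/anatomical/fused
    guidata(fig, s);
    set(fig, 'KeyPressFcn',          @(f, evt) onKey(f, evt, volume));
    set(fig, 'WindowScrollWheelFcn', @(f, evt) onScroll(f, evt, volume));
    redraw(fig, volume);
    end

    function onKey(f, evt, volume)
    s = guidata(f);
    switch evt.Key
        case 'q'        % cycle Functional -> Anatomical -> Fused
            s.view = mod(s.view, 3) + 1;
        case 'space'    % would trigger "Add ROI" in the real GUI
            disp('Add ROI to the current label');
        case 'control'  % would trigger "Subtract ROI"
            disp('Subtract ROI from the current label');
    end
    guidata(f, s); redraw(f, volume);
    end

    function onScroll(f, evt, volume)
    s = guidata(f);   % scrolling changes the current slice, clamped to the volume
    s.slice = min(max(s.slice + evt.VerticalScrollCount, 1), size(volume, 3));
    guidata(f, s); redraw(f, volume);
    end

    function redraw(f, volume)
    s = guidata(f);
    figure(f); imagesc(volume(:, :, s.slice)); axis image off; colormap(gray);
    title(sprintf('View %d, slice %d', s.view, s.slice));
    end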

Fig. 4: Example of a segmentation involving 3 labels. Left: a rough ROI is created by the user. Middle: the AP PET segmentation is applied to the rough ROI for refinement of the boundary. Right: a hybrid rendering of the segmentations, with functional information overlaid. The redder areas correspond to higher radiotracer uptake while bluer areas have less uptake.

3 Segmentation

3.1 Rough ROI Definition

A rough region of interest (ROI) can be easily added to the images. First, the user selects an image label in the listbox on the left. The first label created is named Label 1 k (the trailing letter indicates the label's color; see Section 3.4). Adding a new label, deleting a label, and changing a label's color are outlined in the following subsections. To add an ROI to the currently selected label, the user clicks on the red Add ROI button, then left-clicks in the main image view and, while holding the left mouse button down, drags the cursor around the region of interest. Once completed, release the left mouse button and the ROI changes to the color of the currently selected label. To subtract from the current label, the user clicks on the red Subtract ROI button and follows the same procedure. Additionally, to speed up adding and subtracting ROIs, the red buttons do not have to be clicked each time: pressing the Space Bar is equivalent to clicking the Add ROI button, while the Control key is equivalent to the red Subtract ROI button. These default keys can be changed to other keyboard keys via the Settings button in the top right of the GUI, and they allow for very efficient editing of the ROIs.
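
The add/subtract behavior described above can be pictured as freehand editing of a binary label mask. The sketch below uses the standard drawfreehand and createMask functions to show the idea; the function name and structure are illustrative assumptions and may differ from the drawing primitives actually used by QAV-PET.

    % edit_label_mask.m -- sketch of adding to or subtracting from a label mask
    % with a freehand ROI drawn over the currently displayed slice (an image must
    % already be shown in the current axes; labelMask is a logical image of the
    % same size). Requires the Image Processing Toolbox.
    function labelMask = edit_label_mask(labelMask, addRoi)
    h   = drawfreehand(gca);        % user drags the cursor around the region of interest
    roi = createMask(h);            % convert the drawn boundary into a binary mask
    delete(h);
    if addRoi                       % "Add ROI" button / Space Bar
        labelMask = labelMask | roi;
    else                            % "Subtract ROI" button / Control key
        labelMask = labelMask & ~roi;
    end
    end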

3.2 Interpolation

Interpolation can be performed between two ROIs that belong to the same image label on two different slices. After drawing the two ROIs, the user clicks on the Interpolate button on the right of the GUI and the software interpolates between the two slices. Interpolating between slices of the ROI allows quick and efficient ROI definition by the user.

3.3 ROI Refinement

3.3.1 Affinity Propagation (AP) Image Segmentation

Once the rough ROI is created, the segmentation can be automatically refined using a segmentation algorithm named Affinity Propagation (AP) PET image segmentation [1, 2]. The automatic segmentation algorithm refines the rough manual segmentations to the exact boundary of the high-uptake region, which increases accuracy and lowers variability while also decreasing total segmentation time. It was shown in [1] that the AP segmentation algorithm is more accurate than the current state-of-the-art PET image segmentation methods for the diffuse, multi-focal uptake regions commonly seen in pulmonary infections such as tuberculosis. Very briefly, the segmentation method estimates and smooths the histogram of the region of interest via kernel density estimation [3] in order to identify and remove the points along the histogram that are most likely to be noise. Then, a novel similarity (affinity) function is applied to the histogram to estimate the similarity of the points along the histogram, with the assumption that points that are most similar are more likely to belong to the same class, i.e. image label. Lastly, the Affinity Propagation clustering algorithm [4] is applied to these similarities to automatically determine the number and membership of the classes within the image (a simplified sketch of this pipeline is given at the end of this section). All parameters of the AP image segmentation algorithm can be easily adjusted, if needed, within the software. The rough ROI definition followed by a fully automated segmentation algorithm dramatically decreases the total segmentation time and the inter- and intra-observer variability while also increasing overall accuracy.

3.4 Image Labels

ROI labels can be easily added to create multiple ROI definitions, giving the researcher the ability to separate and study pathological high-uptake regions individually. This is done by clicking on the Add Label button on the left side of the GUI. The letter after the label name is the color of the label. If the user wishes to change the color of a particular label, this can be accomplished with the Rename button by changing the letter to the first letter of the desired color. For a list of all possible colors, please check the MATLAB documentation; a number of basic colors are allowed.
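
To give a flavor of the refinement step in Section 3.3.1, the sketch below combines kernel density estimation of the ROI histogram with a textbook affinity propagation loop [3, 4]. It is only a schematic stand-in: the similarity function, the noise-removal rule, and all parameter choices here are simplifying assumptions, not the affinity function or settings used by QAV-PET [1, 2].

    % ap_roi_sketch.m -- schematic AP-based labelling of PET intensities in a rough ROI.
    % Requires the Statistics and Machine Learning Toolbox for ksdensity.
    function labels = ap_roi_sketch(petSlice, roiMask)
    vals = double(petSlice(roiMask));
    % Smooth the ROI histogram with kernel density estimation and drop near-empty
    % bins (a crude stand-in for the noise-removal step described above).
    [density, grid] = ksdensity(vals, linspace(min(vals), max(vals), 64));
    pts = grid(density > 0.01 * max(density))';
    n = numel(pts);
    S = -(pts - pts').^2;                 % assumed similarity: -(intensity difference)^2
    S(1:n+1:end) = median(S(:));          % preference = median similarity (default choice)
    ex = affinity_propagation(S, 200, 0.9);
    [~, nearest] = min(abs(vals - pts'), [], 2);   % nearest kept histogram point
    labels = zeros(size(petSlice));
    labels(roiMask) = ex(nearest);        % voxel label = exemplar of its histogram point
    end

    function ex = affinity_propagation(S, maxIter, lam)
    % Textbook affinity propagation message passing (Frey and Dueck, 2007).
    N = size(S, 1); R = zeros(N); A = zeros(N);
    for it = 1:maxIter
        AS = A + S;                                     % update responsibilities
        [mx, mi] = max(AS, [], 2);
        AS(sub2ind([N N], (1:N)', mi)) = -inf;
        mx2 = max(AS, [], 2);
        Rnew = S - mx;
        Rnew(sub2ind([N N], (1:N)', mi)) = S(sub2ind([N N], (1:N)', mi)) - mx2;
        R = lam*R + (1 - lam)*Rnew;
        Rp = max(R, 0); Rp(1:N+1:end) = diag(R);        % update availabilities
        Anew = sum(Rp, 1) - Rp;
        dA = diag(Anew); Anew = min(Anew, 0); Anew(1:N+1:end) = dA;
        A = lam*A + (1 - lam)*Anew;
    end
    [~, ex] = max(A + R, [], 2);   % each point's exemplar; equal indices = same cluster
    end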

4 Quantification

Regions of interest (ROIs) must be defined before any type of quantification or analysis can take place. Essentially, the ROI tells the software where the object of interest is in the image so that the image properties of this region, such as the highest Standardized Uptake Value (SUVmax), can be computed. The software allows rough ROIs to be drawn by the user and then automatically refined (segmented), as described in Section 3. For the initial rough ROI, the user first selects the label to assign the object of interest to, such as Label 1, 2, 3, etc. Then the software tracks the position of the mouse cursor while the user manually draws the boundary of the object of interest. Once an ROI is created, the statistics of the defined region can be computed and displayed as a table or exported as a .csv file. These statistics include the volume, SUVmax, and SUVmean over all the slices for the given label, which have been shown to be of particular importance for quantifying the pathologies of infectious lung diseases in longitudinal small-animal models [5].

4.1 Getting Label Statistics

A table of the imaging statistics of the current image can be obtained by clicking on the Material Statistics button underneath the label listbox. A pop-up menu will appear asking whether the user wants these values in a MATLAB-generated table (for easy copying, pasting, or processing within MATLAB) or saved as a .csv file (for opening in Excel or similar software). The SUVmax, SUVmean, and mean Hounsfield units (CT mean (HU)) are displayed at the top of the GUI for the particular slice selected. In addition, the SUV at a particular point can be investigated by clicking on the SUV button at the bottom left of the main imaging viewer. This allows the researcher to obtain the SUV at a particular point on the image regardless of whether the PET image is currently shown, which can be very useful when determining the radiotracer uptake of a particular anatomical abnormality.
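
For readers who want the same numbers outside the GUI, here is a minimal sketch of computing the per-label volume, SUVmax, and SUVmean from a label mask and an SUV-calibrated PET volume, and writing them to a .csv file. The function name, the voxel-size argument, and the column names are illustrative assumptions, not the QAV-PET source.

    % label_statistics.m -- sketch of per-label ROI statistics and .csv export.
    % suvVolume   : PET volume already converted to SUV
    % labelVolume : integer label volume (0 = background), same size as suvVolume
    % voxelSizeMM : voxel dimensions in millimetres, e.g. [2 2 2]
    function stats = label_statistics(suvVolume, labelVolume, voxelSizeMM, csvFile)
    labels = unique(labelVolume(labelVolume > 0));
    volumeML = zeros(numel(labels), 1); suvMax = volumeML; suvMean = volumeML;
    for i = 1:numel(labels)
        voxels = suvVolume(labelVolume == labels(i));
        volumeML(i) = numel(voxels) * prod(voxelSizeMM) / 1000;   % mm^3 -> mL
        suvMax(i)  = max(voxels);
        suvMean(i) = mean(voxels);
    end
    stats = table(labels, volumeML, suvMax, suvMean, ...
        'VariableNames', {'Label', 'Volume_mL', 'SUVmax', 'SUVmean'});
    if nargin > 3
        writetable(stats, csvFile);   % e.g. label_statistics(suv, lab, [2 2 2], 'stats.csv')
    end
    end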

Fig. 5: Example page from the auto-generated report. The title at the top includes the lesion number along with the volume, SUVmax, and SUVmean for easy comparison between the various high-uptake regions. The first row shows the location of the SUVmax, marked with a star, on the axial image, while the second and third rows show the sagittal and coronal slices, respectively. The PET image, CT image, and registered PET-CT image are shown in the first, second, and third columns, respectively.

4.2 Auto-reporting and Renderings

This software has a novel auto-reporting feature which automatically produces a report containing the most important information needed for quantifying the disease and high-uptake regions. It is meant to complement, not replace, the quantification exported as a .csv file. After pressing the report button, the user selects whether to create a report for each label individually or a single report for all the labels, essentially treating all the labels as one. The report includes both quantitative and qualitative data. The quantitative data include the SUVmax, SUVmean, and volume of the current label; additionally, the report provides the location of the SUVmax on the axial, sagittal, and coronal views of the PET image, the CT image, and the fused PET-CT image, for a total of 9 images on the first page. All of this information gives the user a quick view of the highest-uptake lesion, which is important for disease severity quantification. For an even better visualization, the boundary of the current uptake region is colored with the same color as the label the user used to create the segmentation, for continuity between segmentation creation and report generation. See Figure 5 for an example of the first page of the report.

In addition to these views, the software provides a three-dimensional representation of the segmented high-radiotracer-uptake regions by generating a rendering of the segmented lesions. First, it creates a rendering using the boundary and shape information from the labels. Then, the renderings are overlaid with the functional information from the PET image to show the distribution of the radiotracer uptake along the surface of the uptake region, visualizing the information in a way that is simply not possible by looking only at 2D images. We refer to this type of rendering as a hybrid rendering since it fuses information from different imaging modalities, functional and anatomical. An example of this rendering for three different labels is shown in Figure 4; the redder areas correspond to higher uptake (more severe disease) while bluer areas have less uptake. Notice how easily the distribution of the radiotracer uptake is visualized. Lastly, the report generates a rendering of all the labels, colored using the same coloration and 3D coordinates as when the user was segmenting the regions. This allows the user to get a good sense of the spatial distribution of the various labels within the images.
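
The hybrid rendering described above can be approximated in a few lines with MATLAB's isosurface machinery: build a surface from the label mask and color its vertices by the PET values. This is an illustrative sketch under those assumptions, not the QAV-PET rendering code.

    % hybrid_rendering.m -- sketch of a surface rendering of a segmented uptake
    % region colored by the local PET (uptake) values, in the spirit of Fig. 4.
    function hybrid_rendering(labelMask, petVolume)
    fv = isosurface(smooth3(double(labelMask)), 0.5);   % surface of the segmentation
    p  = patch(fv, 'FaceColor', 'interp', 'EdgeColor', 'none');
    isocolors(petVolume, p);            % color each vertex by the PET value there
    colormap(jet); colorbar;            % redder = higher uptake, bluer = lower uptake
    daspect([1 1 1]); view(3); camlight; lighting gouraud
    end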

5 Concluding Remarks

There are many potential applications of the QAV-PET open-source software, and only a few are highlighted here. Perhaps the most direct application is the quantification and visualization of abnormalities on PET-CT images in small-animal infectious disease studies. The segmentation algorithm that is implemented has been shown to be particularly well suited for segmenting the diffuse, multi-focal PET radiotracer uptakes commonly seen in infectious diseases, and the intuitive and efficient workflow of the software will aid researchers in analyzing these types of images, since there is currently no software which allows easy segmentation of abnormalities on PET-CT images. Another potential application is the visualization of pathologies in three dimensions with the PET functional information overlaid, for determining the optimal histology slice localization: when taking histology slices from a diseased organ, this three-dimensional view of the distribution of the disease will aid researchers in determining the best locations from which to take the histology slices for the best characterization. Lastly, the open-source nature of the software allows researchers to quickly add features or modify existing ones for novel applications. For instance, if the user needs to segment abnormalities based on the fused PET and CT information, a segmentation algorithm such as [6] can easily be added to the image analysis and quantification framework presented here.

The software and code are freely available on the MATLAB File Exchange, a popular site for free MATLAB software. QAV-PET allows researchers to easily and intuitively visualize, segment, render, and automatically generate a report that can be used in the analysis of medical images. If you have any questions or comments, please feel free to contact the authors.

Contact Us

Please feel free to contact us with any questions or comments at Ulas Bagci (ulasbagci@gmail.com, Ulas.Bagci@nih.gov) or Brent Foster (bfoster790@gmail.com, Brent.Foster@nih.gov).

References

[1] B. Foster, U. Bagci, Z. Xu, B. Dey, B. Luna, W. Bishai, S. Jain, and D. J. Mollura, "Segmentation of PET images for computer-aided functional quantification of tuberculosis in small animal models," IEEE Transactions on Biomedical Engineering, vol. 61, pp. 711-724, 2013.

[2] B. Foster, U. Bagci, B. Luna, B. Dey, W. Bishai, S. Jain, Z. Xu, and D. J. Mollura, "Robust segmentation and accurate target definition for positron emission tomography images using affinity propagation," in 2013 IEEE 10th International Symposium on Biomedical Imaging (ISBI), 2013, pp. 1461-1464.

[3] Z. Botev, J. Grotowski, and D. Kroese, "Kernel density estimation via diffusion," The Annals of Statistics, vol. 38, no. 5, pp. 2916-2957, 2010.

[4] B. Frey and D. Dueck, "Clustering by passing messages between data points," Science, vol. 315, no. 5814, pp. 972-976, 2007.

[5] U. Bagci, B. Foster, K. Miller-Jaster, B. Luna, B. Dey, W. R. Bishai, C. B. Jonsson, S. Jain, and D. J. Mollura, "A computational pipeline for quantification of pulmonary infections in small animal models using serial PET-CT imaging," EJNMMI Research, vol. 3, no. 1, pp. 1-20, 2013.

[6] U. Bagci, J. K. Udupa, N. Mendhiratta, B. Foster, Z. Xu, J. Yao, X. Chen, and D. J. Mollura, "Joint segmentation of anatomical and functional images: Applications in quantification of lesions from PET, PET-CT, MRI-PET, and MRI-PET-CT images," Medical Image Analysis, vol. 17, no. 8, pp. 929-945, 2013.