
SCATTERED DATA VISUALIZATION USING GPU

A Thesis Presented to The Graduate Faculty of The University of Akron

In Partial Fulfillment of the Requirements for the Degree Master of Science

Bo Cai

May, 2015

SCATTERED DATA VISUALIZATION USING GPU

Bo Cai

Thesis Approved / Accepted:

Advisor: Dr. Yingcai Xiao
Committee Member: Dr. Tim O'Neil
Committee Member: Dr. Zhong-Hui Duan
Department Chair: Dr. Timothy Norfolk
Dean of the College: Dr. Chand Midha
Interim Dean of the Graduate School: Dr. Rex D. Ramsier

ABSTRACT

Scattered data visualization is commonly used in engineering applications. We usually employ a two-step approach, data modeling followed by rendering, in visualizing scattered data. Performance and accuracy are two important issues in scattered data modeling and rendering. This project developed a GPU-accelerated scattered data visualization system. Shepard's method was used to interpolate scattered data onto a 3D uniform grid, and the Marching Cubes method was used to render the intermediate grid. Techniques such as Localized Data Modeling, Static Local Block Data Modeling and Dynamic Local Block Data Modeling were tested to measure their performance and accuracy. Experiments were conducted with real-world data on a GPU-accelerated scattered data visualization system. The speed-up observed on a GPU (NVidia GeForce GT 525M) over a CPU (Intel Core i5-2410M 2.30 GHz) is 12 to 27 times. Increasing the value of α in Shepard's method can improve accuracy without a performance penalty. Localization can reduce modeling error but incurs a performance penalty. Dynamic Block Localization can increase modeling accuracy significantly, but it has a large speed penalty due to frequent data shifts among GPU memory banks. Static Block Localization, on the other hand, has a smaller performance penalty, but also shows a smaller accuracy improvement. The parallel efficiency of the system is low (0.1314 to 0.2764). Future work includes studying issues related to GPU memory bank conflicts to increase the efficiency, and investigating more GPU-accelerated data interpolation methods for their accuracy and performance.

ACKNOWLEDGEMENTS

I am very thankful to my parents for encouraging and supporting me in pursuing my master's degree and making this thesis possible. I would like to acknowledge the professor who inspired me throughout my master's program: Dr. Yingcai Xiao, thank you very much for your guidance and support throughout the program. Dr. Zhong-Hui Duan and Dr. Tim O'Neil, thank you very much for being my thesis committee members and supporting me in accomplishing this thesis. I also would like to acknowledge the faculty of the Department of Computer Science, Dr. En Cheng, Dr. Chien-Chung Chan, Dr. Kathy Liszka and Dr. Michael L. Collard, for their help during my master's degree study. Their help has directly or indirectly contributed to the accomplishment of this thesis.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES

CHAPTER
I. INTRODUCTION
  1.1 Motivation
  1.2 Survey of Previous Work
  1.3 Outline of the Thesis
II. BACKGROUND
  2.1 Scattered Data Modeling and Visualization
  2.2 Data Interpolation Method
  2.3 Data Visualization Method
III. DESIGN
  3.1 Localized Data Modeling
  3.2 Design of GPU-based Modeling Algorithm
  3.3 Design of GPU-based Visualization Algorithm
IV. IMPLEMENTATION
  4.1 Implementation of the Localized Shepard's Method on GPU
  4.2 Implementation of the Marching Cubes Algorithm on GPU
V. RESULTS AND ANALYSES
  5.1 Global Method Comparisons between CPU and GPU
  5.2 Accuracy Comparisons between Localized Global Method and Non-localized Global Method
  5.3 Performance Comparison between Static Local Block Data Modeling Method and Dynamic Local Block Data Modeling Method
  5.4 Overall Speed-Up and Error Report Analyses
VI. CONCLUSION AND FUTURE WORK
REFERENCES

LIST OF TABLES

5.1 Comparing CPU and GPU Runtime of the Global Shepard's Method for Various Grid Sizes
5.2 GPU Global Method Detailed Runtime
5.3 Data Communication Time, Size and Speed for Various Grid Sizes
5.4 Data Communication Size and Speed for Grid Sizes 64*64*64 to 128*128*128
5.5 Data Communication Size and Speed for Grid Sizes 80*80*80 to 82*82*82
5.6 Comparing Static Local Block Data Modeling and Dynamic Local Block Data Modeling Method Running Time for Various Grid Sizes

LIST OF FIGURES

5.1 Graphical Representation of Non-localized Global Method Data Modeling Result
5.2 Graphical Representation of Non-localized Global Method Data Modeling Numerical Error
5.3 Graphical Representation of Non-localized Global Method Data Modeling Relative Error
5.4 Graphical Representation of Localized Global Method Data Modeling Result
5.5 Graphical Representation of Localized Global Method Data Modeling Numerical Error
5.6 Graphical Representation of Localized Global Method Data Modeling Relative Error
5.7 Graphical Representation of Improved Shepard's Method with α = 2 Result
5.8 Graphical Representation of Improved Shepard's Method with α = 2 Numerical Error
5.9 Graphical Representation of Improved Shepard's Method with α = 2 Relative Error
5.10 Graphical Representation of Improved Shepard's Method with α = 10 Result
5.11 Graphical Representation of Improved Shepard's Method with α = 10 Numerical Error
5.12 Graphical Representation of Improved Shepard's Method with α = 10 Relative Error
5.13 Improved Shepard's Method with α = 11 Relative Error
5.14 RMS of Different Data Modeling Methods
5.15 Graph to Compare the Speed Between Static Local Block Data Modeling and Dynamic Local Block Data Modeling Method

CHAPTER I

INTRODUCTION

1.1 Motivation

High-performance scattered data visualization is in great demand in many engineering applications. Examples of such applications can be found in environmental studies, oil exploration and mining. Volume visualization of scattered data is difficult due to the limited sampling rate and the scattered nature of the data [1]. Site investigations to acquire scattered data are difficult and costly, so sampling points are typically collected from suspected areas of concentration. Hence it is difficult to form a 3D grid from scattered sampling points, and traditional grid-based visualization techniques cannot be applied directly to such data. As a result we usually apply a two-step approach. The first step performs modeling on the scattered sample data to form a 3D uniform grid, in which each grid node has an interpolated data value. Conventional grid-based visualization techniques are then applied to this intermediate grid in the second step, i.e., the rendering step [2].

Traditional CPU-based computing methods are dominant in the modeling field. Even though CPUs have developed very quickly in the past several decades, they still cannot keep up with modern modeling demands [3]. Similarly, interactive visualization has an even higher computational demand. In this project, we aim to speed up both steps, modeling and rendering, by parallelizing them with CUDA.

1.2 Survey of Previous Work

CPU-based interpolation methods have been used in the modeling step of scattered data visualization for years. The ideas of the scattered data visualization correctness dilemma and local constraints are presented in [4]. The advent of GPU CUDA parallel processing has led to research in many areas, and scattered data visualization is one of the most suitable research areas for parallel processing: CUDA parallel processing can benefit both the scattered data modeling process and the visualization process. For GPU-based scattered data modeling, a GPU-based scattered data modeling system was developed in [5], where the author implemented four GPU-based scattered data modeling methods: Shepard's method, the Multiquadric method, the Thin-plate-spline method and the Volume Spline method. In [13] this system was migrated to various platforms such as a GTX480 GPU, a Tesla C2070 GPGPU and an Amazon Web Services cloud-based GPGPU instance. For GPU-based scattered data visualization, there is currently no existing research on visualizing scattered data with CUDA. However, the NVidia CUDA SDK provides a GPU-accelerated implementation of the Marching Cubes algorithm [9]. These tools enabled the development of GPU-based scattered data visualization.

1.3 Outline of the Thesis

This report consists of detailed work explained through several chapters. Chapter I gives the motivation for the project. Chapter II presents background on the technology and the basic theories of scattered data modeling and rendering. Chapter III discusses the design of the scattered data visualization system; the designs of GPU-based Shepard's method modeling and GPU-based Marching Cubes rendering are explained in detail and depicted through diagrams. Chapter IV discusses the implementation of these algorithms on the GPU. Chapter V presents results and analyses: times consumed, memory used, and data communication speeds are reported in order to make comparisons among different cases. Chapter VI summarizes the work and how the system can be utilized; possible modifications as well as some future work are also explained.

CHAPTER II

BACKGROUND

2.1 Scattered Data Modeling and Visualization

Scattered data is unevenly distributed or randomly spread over the volume of interest. The random distribution of the data makes it hard to visualize, since existing visualization algorithms are based on a 3D grid structure [3]. Scattered data is commonly found in engineering applications, and quick interactive visualization of scattered data is in great demand [2]. The most commonly used approach for scattered data visualization consists of two steps [1]. The first step converts the scattered sample data into a 3D uniform grid. Each sample point consists of three values for the position and one data value. To form the grid we interpolate the data values onto each grid node. After the interpolation we can use grid-based visualization techniques, such as Marching Cubes, to visualize the grid.

2.2 Data Interpolation Method

Following the two-step approach, the first step of the procedure is modeling the scattered data into a 3D uniform grid. To model the scattered data, we employ commonly used interpolation methods. Interpolation is a method of constructing new data points within the range of a discrete set of known original data points. In engineering and science, one often has a number of data points obtained through sampling or experimentation; the data represent the values of a function for a limited set of the independent variables [14]. To analyze such data, scientists and engineers usually use mathematical interpolation methods to model it.

Generally speaking, there are two kinds of mathematical interpolation: global interpolation and local interpolation. All the sample points are used to determine the value of each new point in global interpolation, while only the nearby points are used in local interpolation. Usually we use global interpolation methods when modeling scattered data, to make full use of the original data. In global interpolation, given a set of n sample points {(x_i, y_i, z_i), i = 1, 2, ..., n} with a sample value for each point {v_i, i = 1, 2, ..., n}, we construct a function f(x, y, z) that is valid everywhere inside the domain of interest and satisfies the condition f(x_i, y_i, z_i) = v_i, i = 1, 2, ..., n. Once the function is found, it can be used to calculate the value at any location.

For this project we used Shepard's method. Its mathematical expression is:

    f(x, y, z) = [ Σ_{i=1..n} (D / d_i)^α · v_i ] / [ Σ_{i=1..n} (D / d_i)^α ]    (2.1)

where d_i is the distance between sample point i and the grid node, D is the diagonal length of a grid node, and α is usually any real number greater than zero. The inverse-distance weighted method is a special case of Shepard's method where α = 1.
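Equation (2.1) can be illustrated with a small CPU sketch. This is our reading of the formula, not code from the thesis; the function name and the weight form (D/d_i)^α are assumptions:

```python
import math

def shepard_value(grid_node, samples, alpha=1.0, diag=1.0):
    """Interpolate one grid node from ((x, y, z), v) sample pairs using
    Shepard's method, equation (2.1): weights (diag / distance) ** alpha."""
    num = den = 0.0
    for position, value in samples:
        d = math.dist(grid_node, position)
        if d == 0.0:
            return value  # the node coincides with a sample point
        w = (diag / d) ** alpha  # closer samples get larger weights
        num += w * value
        den += w
    return num / den

# A node halfway between two samples receives the average of their values.
print(shepard_value((0.5, 0.0, 0.0),
                    [((0.0, 0.0, 0.0), 10.0), ((1.0, 0.0, 0.0), 20.0)]))  # 15.0
```

With α = 1 this reduces to inverse-distance weighting, matching the special case noted above.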

2.3 Data Visualization Method

Marching Cubes is a computer graphics and visualization algorithm for extracting a polygonal mesh of an iso-surface from a 3D scalar field (sometimes called voxels) [9]. It works by creating an index into a pre-calculated array of 2^8 = 256 possible polygon configurations within the cube, treating each of the 8 scalar values at the cube's corners as a bit in an 8-bit integer. If a scalar value is higher than the iso-value (i.e., it is inside the surface), the corresponding bit is set to one; if it is lower (outside), it is set to zero. The final value after all 8 scalars are checked is the index into the polygon configuration array. Finally, triangles are generated for each voxel using the Marching Cubes lookup table, so that the vertices are connected correctly [15].
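The 8-bit index construction described above can be sketched on the CPU as follows (the corner ordering is an assumption; the text does not fix one):

```python
def cube_index(corner_values, iso):
    """Pack the inside/outside state of a voxel's 8 corners into one
    8-bit integer: a corner's bit is set when its value is above the
    iso-value (inside the surface), as described in this section."""
    index = 0
    for bit, value in enumerate(corner_values):
        if value > iso:
            index |= 1 << bit
    return index  # 0..255, a row of the 256-entry lookup table

print(cube_index([0.0] * 8, 0.5))                     # 0   (all corners outside)
print(cube_index([1.0] * 8, 0.5))                     # 255 (all corners inside)
print(cube_index([1.0, 0, 0, 0, 0, 0, 0, 1.0], 0.5))  # 129 (a surface voxel)
```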

CHAPTER III

DESIGN

3.1 Localized Data Modeling

To model scattered data, we usually employ interpolation methods, which construct new data points within the range of a discrete set of known original data points. The most commonly used mathematical interpolation methods support two kinds of data modeling: globalized data modeling and localized data modeling. Globalized data modeling methods use all sample points to interpolate a grid value; localized data modeling uses only nearby sample points. In our project, we focus on localized data modeling.

As previously stated, localized data modeling uses only nearby sample points to interpolate a grid value. How to define "nearby" is an interesting question, so we have included two options: Range-Oriented Localized Data Modeling (ROLDM) and Block-Oriented Localized Data Modeling (BOLDM).

Range-Oriented Localized Data Modeling (ROLDM) is a distance-based localized data modeling method. Each time we interpolate a grid value, we draw a circle (in 2D) or a sphere (in 3D) using this grid point as the center and a chosen radius. If the radius is large enough to contain all sample points, the result will be the same as that of globalized data modeling. We use only the sample points inside this circle or sphere to calculate the grid value and ignore any sample points outside it.

Block-Oriented Localized Data Modeling (BOLDM) is designed for the GPU's architecture. We divide the entire data volume into small blocks, or sub-volumes, which fit exactly in shared memory; we discuss this further in the design of the GPU-based modeling algorithm below. Every grid point and every sample point has a block ID, and each grid point value is interpolated using only the sample points with the same block ID.

3.2 Design of GPU-based Modeling Algorithm

Calculating the data values in parallel is the basic idea behind the design of a GPU-based modeling algorithm. As discussed, we designed two types of localized data modeling methods: Range-Oriented Localized Data Modeling (ROLDM) and Block-Oriented Localized Data Modeling (BOLDM).

The design of Range-Oriented Localized Data Modeling (ROLDM) is as follows:

1) Define the grid size in each dimension.
2) Allocate one-dimensional arrays for sample points and grid points on the CPU.
3) Read sample points from a text file and write the positions and data values into the arrays.
4) Scale the sample point position values by the following steps:

a) Find the maximum and minimum of the x, y and z values of the sample points.
b) Divide each x, y and z value of the sample points by the difference of the maximum and minimum, and multiply by the grid size in that dimension.
c) Write the scaled sample point positions into an array.
5) Allocate one-dimensional arrays for sample points and grid points on the GPU.
6) Calculate the block dimension and grid dimension from the grid size.
7) Copy the scaled sample point positions and values from the CPU to the GPU.
8) Call the kernel function, passing the block dimension, grid dimension, grid array pointer and sample data pointer.
9) Allocate shared memory.
10) Each kernel thread performs the following steps:
a) Load this thread's corresponding sample point data from global memory into shared memory.
b) Synchronize all the threads; wait until all the sample data are loaded into shared memory.
c) Calculate this thread's corresponding grid point index using the block index, block dimension and thread index.
d) Calculate the distances between this thread's corresponding grid point and all the sample data points.
e) Ignore the sample data points far away from this thread's corresponding grid point and record the nearby sample points.

f) Interpolate this thread's corresponding grid point value using the recorded nearby sample points.
g) Write the interpolated value into the array.
11) Copy the interpolated grid values back from the GPU to the CPU.
12) Free the GPU memory.

The design of Block-Oriented Localized Data Modeling (BOLDM) is as follows:

1) Define the grid size in each dimension.
2) Allocate one-dimensional arrays for sample points and grid points on the CPU.
3) Read sample points from a text file and write the positions and data values into the arrays.
4) Scale the sample point position values by the following steps:
a) Find the maximum and minimum of the x, y and z values of the sample points.
b) Divide each x, y and z value of the sample points by the difference of the maximum and minimum, and multiply by the grid size in that dimension.
c) Write the scaled sample point positions into an array.
5) Allocate one-dimensional arrays for sample points and grid points on the GPU.
6) Calculate the block dimension and grid dimension from the grid size.
7) Divide the sample data points into blocks using the grid dimension.
8) Copy the scaled sample point positions and values from the CPU to the GPU.
9) Call the kernel function, passing the block dimension, grid dimension, grid array pointer and sample data pointer.
10) Allocate shared memory.

11) Each kernel thread performs the following steps:
a) Load this thread's corresponding block of sample point data from global memory into shared memory.
b) Synchronize all the threads; wait until all the sample data are loaded into shared memory.
c) Calculate this thread's corresponding grid point index using the block index, block dimension and thread index.
d) Interpolate this thread's corresponding grid point value using this block's sample data.
e) Write the interpolated value into the array.
12) Copy the interpolated grid values back from the GPU to the CPU.
13) Free the GPU memory.

3.3 Design of GPU-based Visualization Algorithm

Marching Cubes is a surface reconstruction algorithm [8]. It extracts a geometric iso-surface from a volume of voxels. There are three situations for a vertex of a voxel:

1) If the value of the vertex is less than the iso-value, the vertex is outside the iso-surface.
2) If the value of the vertex equals the iso-value, the vertex is on the iso-surface.
3) If the value of the vertex is greater than the iso-value, the vertex is inside the iso-surface.
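The three situations above amount to a three-way comparison per vertex; a trivial CPU sketch (the function name is ours):

```python
def classify_vertex(value, iso):
    """Return -1 if the vertex is outside the iso-surface (value < iso),
    0 if it lies on the surface, and +1 if it is inside (value > iso)."""
    if value < iso:
        return -1
    return 0 if value == iso else 1

print([classify_vertex(v, 0.5) for v in (0.2, 0.5, 0.9)])  # [-1, 0, 1]
```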

A border voxel is one whose vertices are neither all inside nor all outside the iso-surface. Ignoring the second situation (where the value of a vertex equals the iso-value), there are 256 possible configurations for each voxel: each voxel has 8 vertices and each vertex has 2 situations, either inside or outside the iso-surface, giving 2^8 = 256 configurations. We therefore predefine the triangle mesh approximating the part of the iso-surface for each configuration [9]. The edgeTable[256] array stores the 256 possible configurations as a lookup table. For each of the possible vertex states listed in edgeTable[256] there is a specific triangulation; triTable[256] lists all of them as triples of edge indices, so there are 256 ways to draw the triangles. In 3D space we enumerate 256 different situations for the Marching Cubes representation, and all of these cases can be generalized into 15 unique topological cases [7].

The main idea of the GPU-based Marching Cubes algorithm is that each GPU thread computes one voxel of the entire volume. The design of the GPU-based Marching Cubes algorithm is as follows:

1) Initialization.
2) Allocate one-dimensional arrays for the grid data values, the voxel cases of the entire volume, the edge lookup table and the triangle lookup table on the CPU.
3) Read the grid data values interpolated by ROLDM or BOLDM and write them into the CPU arrays.
4) Allocate one-dimensional arrays for the grid data values, the voxel cases of the entire volume, the edge lookup table and the triangle lookup table on the GPU.
5) Copy the grid data values array from the CPU to the GPU.
6) Calculate the block dimension and grid dimension from the volume size.
7) Call the kernel function, passing the block dimension, the grid dimension, the iso-value and the grid data values array pointer.
8) Each kernel thread performs the following steps:
a) Calculate this thread's corresponding voxel index using the block index, the block dimension and the thread index.
b) Read the vertex values of this thread's corresponding voxel from the grid data values array.
c) Compare each vertex value against the iso-value to generate 8 scalar values indicating the states of the 8 vertices of this voxel. Each of the 8 scalar values is treated as a bit in an 8-bit integer, where inside is 0 and outside is 1.
d) Write the result of the comparison, an 8-bit integer, into the voxel cases array.
9) Copy the voxel cases array back from the GPU to the CPU.
10) Draw the triangles using the voxel cases array, the edge lookup table and the triangle lookup table.
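Step 6 above, calculating the launch configuration from the volume size, amounts to covering every voxel with enough blocks. A one-dimensional sketch, with an assumed (tunable) thread count per block:

```python
def launch_config(num_voxels, threads_per_block=128):
    """Return (blocks, threads_per_block) so that blocks * threads_per_block
    covers num_voxels; the ceiling division is the usual CUDA idiom."""
    blocks = (num_voxels + threads_per_block - 1) // threads_per_block
    return blocks, threads_per_block

# A 32*32*32 volume is covered by 256 blocks of 128 threads each.
print(launch_config(32 * 32 * 32))  # (256, 128)
```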

CHAPTER IV

IMPLEMENTATION

4.1 Implementation of the Localized Shepard's Method on GPU

Shepard's method is represented as:

    f(x, y, z) = [ Σ_{i=1..n} (D / d_i)^α · v_i ] / [ Σ_{i=1..n} (D / d_i)^α ]    (2.1)

where d_i is the distance between sample point i and the grid node, D is the diagonal length of a grid node, and α is usually any real number greater than zero. The inverse-distance weighted method is a special case of Shepard's method where α = 1.

Each grid point has its own position, represented by x, y and z. These values are mapped to a kernel thread index so that each grid point is assigned a kernel thread. Thus, the program can be parallelized so that each thread calculates the data value for one grid point. All the grid point positions and values are stored in a one-dimensional array, since the GPU and the CPU communicate through one-dimensional arrays. The index into the one-dimensional array indicates the position of a grid point:

    Index = z * xDimensionSize * yDimensionSize + y * xDimensionSize + x    (4.1)
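Equation (4.1) is standard row-major flattening; a kernel thread inverts it to recover its grid position. A sketch of both directions:

```python
def flatten(x, y, z, x_dim, y_dim):
    """Map a 3-D grid position to its 1-D array index (equation 4.1)."""
    return z * x_dim * y_dim + y * x_dim + x

def unflatten(index, x_dim, y_dim):
    """Recover (x, y, z) from a 1-D index -- the inverse mapping a
    thread uses to find which grid point it owns."""
    z, rest = divmod(index, x_dim * y_dim)
    y, x = divmod(rest, x_dim)
    return x, y, z

i = flatten(3, 2, 1, 8, 8)
print(i, unflatten(i, 8, 8))  # 83 (3, 2, 1)
```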

Each thread can calculate its corresponding grid point position using threadIdx.x, blockDim.x and blockIdx.x:

    Index = threadIdx.x + blockDim.x * blockIdx.x    (4.2)

Since a kernel thread has been assigned to each grid point, all the grid points are interpolated simultaneously using the Shepard's method equation. In Range-Oriented Localized Data Modeling (ROLDM), all sample points are loaded into shared memory when each kernel thread begins. The distances between the current thread's grid point and all sample points are calculated to determine whether each sample point is nearby or not. The distance equation is:

    Distance = sqrt((x1 - x2)^2 + (y1 - y2)^2 + (z1 - z2)^2)    (4.3)

Pseudo code:

Input: sample points array
Output: 3D uniform grid with a data value on each grid node

    Load sample points into shared memory
    Synchronize all the threads
    Calculate the index using the kernel thread index equation (4.2)
    Parse the x, y and z values from the index
    Calculate the distance between the current grid point and all sample points
    Record nearby sample points whose distance is less than a certain value

    Interpolate the current grid point value using the Shepard's method equation; only recorded samples are used
    Write the grid point value back into an array

4.2 Implementation of the Marching Cubes Algorithm on GPU

Three kernel functions are implemented for the GPU-based Marching Cubes algorithm: classifyVoxel, compactVoxels and generateTriangles.

Each kernel thread is assigned a voxel and executes classifyVoxel to determine whether this voxel will be displayed, i.e., whether there is an intersection on an edge of this voxel. We compare the iso-value with the value of each vertex to decide this. If all of the vertex values of the voxel are less than, or all are greater than, the iso-value, the voxel will not be displayed. If some of the vertex values are less than the iso-value and others are greater, there is an intersection on an edge and the voxel will be displayed. The classifyVoxel kernel function outputs the voxelOccupied array, which indicates whether each voxel is non-empty and will be displayed. The voxelVertices array records the vertex states in order to tell the generateTriangles kernel function how to display the triangles.

We execute compactVoxels right after classifyVoxel to compact the voxelOccupied array and remove the empty voxels. This allows us to run the complex generateTriangles kernel on only the occupied voxels, which gives high performance. Both of the lookup tables, edgeTable and triTable, are loaded into GPU texture memory. Each kernel thread calculates its corresponding voxel case from the voxelVertices array. After the voxel case is loaded, each kernel thread goes through both lookup tables to find how to generate triangles for this voxel case. Thus the triangles are generated correctly.
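The compaction step can be illustrated on the CPU: given the voxelOccupied flags, build a dense list of occupied voxel indices so the later kernel touches only those voxels. On the GPU this is typically done in parallel with an exclusive prefix sum over the flags; the sequential sketch below (hypothetical names) produces the same result.

```c
#include <stddef.h>

/* Compact the indices of occupied voxels into `compacted` and return
 * their count.  occupied[i] is nonzero when voxel i intersects the
 * iso-surface.  The GPU version derives each output position from an
 * exclusive prefix sum over `occupied`, so every voxel can write its
 * slot concurrently; this loop is the sequential equivalent. */
size_t compact_voxels(const unsigned char *occupied, size_t n,
                      unsigned *compacted)
{
    size_t count = 0;
    for (size_t i = 0; i < n; ++i)
        if (occupied[i])
            compacted[count++] = (unsigned)i;
    return count;
}
```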

CHAPTER V

RESULTS AND ANALYSES

5.1 Global Method Comparisons between CPU and GPU

The implementation of the presented algorithm has been tested on a Dell computer with an NVidia GeForce GT 525M. The following are its specifications:

CUDA Driver Version / Runtime Version: 5.5 / 5.5
CUDA Capability Major/Minor version number: 2.1
Number of Multiprocessors: 2
Number of CUDA cores per Multiprocessor: 48
Total Number of CUDA Cores: 96
Total amount of global memory: 1024 MBytes
Total amount of shared memory per block: 49152 bytes

Various grid sizes have been chosen in order to compare the time consumed and draw conclusions regarding how much the GPU-based program speeds up the computation. The codes are written with similar logic for both the CPU-based sequential program and the GPU-based parallel program. Table 5.1 shows the average running times in milliseconds (ms) over ten experiments for each grid size.

Table 5.1 Comparing CPU and GPU Runtimes (ms) for the Global Shepard's Method at Various Grid Sizes

Grid Size      CPU Runtime   GPU Runtime   SpeedUp Factor   Efficiency
1*1*1          0.003         50.2          0.00018          <0.0001
2*2*2          0.030         51.2          0.00080          <0.0001
4*4*4          0.253         51.2          0.00494          <0.0001
8*8*8          1.931         54.2          0.03562          0.0004
16*16*16       17.341        55.9          0.31021          0.0032
32*32*32       140.123       65.804        2.12765          0.0221
64*64*64       1014.421      115.702       8.76768          0.0914
128*128*128    13508.452     485.898       27.81003         0.2552

The speedup factor is the ratio of the CPU runtime to the GPU runtime; it captures the relative benefit of parallel processing. The speedup factor equation is defined as:

SpeedUp Factor S(p) = T_CPU / T_GPU (5.1)

Efficiency measures the fraction of time for which a processing element is usefully employed in a computation [11]. The efficiency equation is defined as:

Efficiency E = S(p) / p (5.2)

where p is the number of processing elements.

The GPU-based program has overhead factors such as process synchronization, memory allocation and data communication. The proportion of these overheads becomes smaller as the grid size increases. Thus, the GPU-based program becomes increasingly efficient as the grid size increases.
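Equations (5.1) and (5.2) can be checked directly against a row of Table 5.1. The sketch below simply encodes the two definitions, assuming p = 96 CUDA cores as the processor count; it is an illustration, not the thesis's measurement code.

```c
/* Speedup S(p) = T_cpu / T_gpu (equation 5.1) and efficiency
 * E = S(p) / p (equation 5.2), with runtimes in milliseconds. */
double speedup(double t_cpu, double t_gpu) { return t_cpu / t_gpu; }
double efficiency(double s, int p)         { return s / (double)p; }
```

For the 128*128*128 row, speedup(13508.452, 485.898) gives roughly 27.8, matching the table's SpeedUp Factor column.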

The running time of the CPU exceeds that of the GPU when the grid size is larger than 21*21*21, and the GPU-based global method shows better results as the grid size increases. When the grid size is smaller than 21*21*21, the CPU-based global method has the advantage because the GPU-based global method carries the overhead of memory copies and synchronization. When the grid size is larger than 21*21*21, the GPU-based global method takes great advantage of parallel processing: the serial compute time is significantly longer than the GPU communication time. The overhead of the GPU-based global method's data communication has also been examined in detail. Table 5.2 shows the average running times in milliseconds (ms) over ten experiments for each grid size.

Table 5.2 GPU Global Method Detailed Runtimes (ms)

Grid Size      GPU Runtime   Kernel Compute   H2D Copy   D2H Copy   Malloc   Data Communication
1*1*1          50.2          0.002            0.672      0.002      50       51
2*2*2          51.2          0.002            0.672      0.002      51       52
4*4*4          51.2          0.002            0.672      0.002      51       52
8*8*8          54.2          0.002            0.640      0.002      54       55
16*16*16       55.9          0.009            0.704      0.004      55       57
32*32*32       65.804        6.804            0.704      0.021      59       60
64*64*64       115.702       53.702           0.672      0.160      62       65
128*128*128    485.898       428.898          0.672      2.100      57       60

Note: Data Communication Time is the sum of the Data Copy Host to Device (H2D) Time, the Data Copy Device to Host (D2H) Time and the Malloc Memory Time.

The NVIDIA Visual Profiler is a cross-platform performance profiling tool that provides developers with vital feedback for optimizing CUDA C/C++ applications [10].

We applied the NVIDIA Visual Profiler as our timing tool. Table 5.2 breaks the GPU runtime of the GPU-based global method into four parts: GPU Kernel Compute Runtime, Data Copy Host to Device Time, Data Copy Device to Host Time and Malloc Memory Time. The smallest time unit of the NVIDIA Visual Profiler is 0.002 ms, so all times less than or equal to 0.002 ms are shown as 0.002 ms in this table.

The GPU Kernel Compute Runtime is the time of all kernel computations from beginning to end. These values are meaningless when the grid size is smaller than 16*16*16 because they are too small to be monitored by the NVIDIA Visual Profiler. The GPU Kernel Compute Runtime increases approximately 8 times each time the grid dimension doubles, from 32*32*32 to 128*128*128. The Data Copy Host to Device Time stays approximately the same because the same sample data are used for each experiment. The Data Copy Device to Host Time is meaningless when the grid size is less than 16*16*16, again because of the Visual Profiler's smallest time unit, and it increases with the output data size, which is determined by the grid size. The Malloc Memory Time also grows with the amount of memory needed. The speeds of the global memory copies were also measured and are shown in Table 5.3.

Table 5.3 Data Communication Time, Size and Speed for Various Grid Sizes

Grid Size      H2D Data Size   H2D Time (ms)   H2D Speed    D2H Data Size   D2H Time (ms)   D2H Speed
1*1*1          2156 KB         0.672           3.06 GB/s    4 bytes         0.002           1.66 MB/s
2*2*2          2156 KB         0.672           3.06 GB/s    32 bytes        0.002           14.67 MB/s
4*4*4          2156 KB         0.672           3.06 GB/s    256 bytes       0.002           117.38 MB/s
8*8*8          2156 KB         0.640           3.21 GB/s    2 KB            0.002           847.71 MB/s
16*16*16       2156 KB         0.704           2.92 GB/s    16 KB           0.004           3.29 GB/s
32*32*32       2156 KB         0.704           2.92 GB/s    128 KB          0.021           5.56 GB/s
64*64*64       2156 KB         0.672           3.06 GB/s    1 MB            0.160           6.09 GB/s
128*128*128    2156 KB         0.672           3.06 GB/s    8 MB            2.100           2.82 GB/s

(H2D = Data Copy Host to Device; D2H = Data Copy Device to Host.)

The Data Copy Host to Device speeds are all approximately the same since the hardware does not change. The largest Data Copy Device to Host speed is observed at grid size 64*64*64.
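The speed columns follow from size divided by time. A small helper (hypothetical, not from the thesis) reproduces the host-to-device rows of Table 5.3, assuming the table reports binary gigabytes (GiB); some device-to-host rows differ slightly, presumably from profiler rounding of the very short times.

```c
/* Effective copy bandwidth in binary GB/s (GiB/s) from a transfer
 * size in bytes and a time in milliseconds. */
double copy_speed_gb(double bytes, double ms)
{
    double seconds = ms / 1000.0;
    return bytes / seconds / (1024.0 * 1024.0 * 1024.0);
}
```

For example, 2156 KB copied in 0.672 ms works out to about 3.06 GB/s, matching the table.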

Table 5.4 Data Communication Size and Speed for Grid Sizes 64*64*64 to 128*128*128

Grid Size      D2H Data Size   D2H Speed
64*64*64       1 MB            6.09 GB/s
70*70*70       1.308 MB        5.78 GB/s
80*80*80       1.953 MB        5.65 GB/s
90*90*90       2.781 MB        2.81 GB/s
100*100*100    3.815 MB        2.95 GB/s
110*110*110    5.077 MB        2.82 GB/s
120*120*120    6.592 MB        2.91 GB/s
128*128*128    8 MB            2.82 GB/s

Table 5.5 Data Communication Size and Speed for Grid Sizes 80*80*80 to 82*82*82

Grid Size    D2H Data Size   D2H Speed
80*80*80     1.953 MB        5.65 GB/s
81*81*81     2.027 MB        2.65 GB/s
82*82*82     2.103 MB        2.68 GB/s

Tables 5.4 and 5.5 show that the peak Data Copy Device to Host speed is observed at grid size 80*80*80.

5.2 Accuracy Comparisons between the Localized Global Method and the Non-localized Global Method

The two-step approach to scattered data visualization faces many issues, one of which is accuracy. We employed numerical error analysis to evaluate the accuracy of the scattered data modeling. In the data modeling step, every grid node value is constructed from the input sample points. These interpolated grid node values are then used to reproduce the data values at the original sample points (by linear interpolation). Analytically, the interpolated grid node values can exactly reproduce the original data values at the sample points; numerically, they cannot, due to numerical errors. Such numerical errors can be calculated by:

εi = f(xi, yi, zi) − vi,  i = 1, ..., n, (5.4)

where vi is the scattered data value at sample point (xi, yi, zi) and f(xi, yi, zi) is the value interpolated from the grid nodes at that point [12]. The relative errors are calculated from the numerical errors and the original data values. The root mean square (RMS) measures the differences between the values predicted by a model or an estimator and the values actually observed. We use the absolute value of the relative error to calculate the RMS:

Relative Error_i = |εi / vi| (5.5)

The RMS, like a sample standard deviation, summarizes the accuracy of the experiment. The RMS equation is:

RMS = √((1/n) Σ (Relative Error_i)²) (5.6)

[Figure: plot of real values versus experimental values at each sample point]
Figure 5.1 Graphical Representation of the Non-Localized Global Method Data Modeling Result

[Figure: bar/line plot of the numerical error at each sample point]
Figure 5.2 Graphical Representation of the Non-localized Global Method Data Modeling Numerical Error

[Figure: bar/line plot of the relative error at each sample point]
Figure 5.3 Graphical Representation of the Non-localized Global Method Data Modeling Relative Error

We can see that the results of the non-localized global method data modeling have a seemingly low accuracy. The RMS is 534.496, which is not desirable; we can barely see the trend of the data point values as they increase and decrease. Clearly, the result is not satisfactory. In order to improve accuracy, we employ localized global method data modeling, which uses only nearby sample points to interpolate a grid value. We use 8 by 8 by 8 as the range size. The results of the localized global method data modeling are shown in Figure 5.4.

[Figure: plot of real values versus experimental values at each sample point]
Figure 5.4 Graphical Representation of the Localized Global Method Data Modeling Result

[Figure: bar/line plot of the numerical error at each sample point]
Figure 5.5 Graphical Representation of the Localized Global Method Data Modeling Numerical Error

[Figure: bar/line plot of the relative error at each sample point]
Figure 5.6 Graphical Representation of the Localized Global Method Data Modeling Relative Error

We can see that the results of the localized global method data modeling are much more accurate than those of the non-localized global method; the accuracy has improved significantly, with the RMS reduced to 241.193. Decreasing the contribution of far away data points is another approach to improving accuracy. We can control the contribution of distant data points by changing α: a distant data point contributes more to the result when α is small, so we can increase the α value to decrease the contribution of far away data points. By default α is 1, so we increase α to 2. The result is shown in Figure 5.7.

[Figure: plot of real values versus experimental values at each sample point]
Figure 5.7 Graphical Representation of the Improved Shepard's Method with α = 2 Result

[Figure: bar/line plot of the numerical error at each sample point]
Figure 5.8 Graphical Representation of the Improved Shepard's Method with α = 2 Numerical Error

[Figure: bar/line plot of the relative error at each sample point]
Figure 5.9 Graphical Representation of the Improved Shepard's Method with α = 2 Relative Error

By changing α from 1 to 2, the accuracy has improved slightly: the RMS is reduced from 241.193 to 230.666. We continue to increase the α value to the accuracy peak, which is 10 in this case. The result is shown in Figure 5.10.

[Figure: plot of real values versus experimental values at each sample point]
Figure 5.10 Graphical Representation of the Improved Shepard's Method with α = 10 Result

[Figure: bar/line plot of the numerical error at each sample point]
Figure 5.11 Graphical Representation of the Improved Shepard's Method with α = 10 Numerical Error

[Figure: bar/line plot of the relative error at each sample point]
Figure 5.12 Graphical Representation of the Improved Shepard's Method with α = 10 Relative Error

The accuracy has improved significantly, and the result is now desirable: the RMS is reduced to 40.057. However, when the value of the α parameter increases to 11, the accuracy drops sharply, as shown in Figure 5.13.

[Figure: plot of real values versus experimental values at each sample point]
Figure 5.13 Graphical Representation of the Improved Shepard's Method with α = 11 Result

The accuracy drops sharply when α increases to 11 because the contribution of distant points becomes too small; the RMS increases to 431.324. The overall RMS changes are shown in Figure 5.14.

[Figure: bar chart of the RMS of the relative error for the non-localized global method, the localized global method, and α = 2, α = 10, α = 11]
Figure 5.14 RMS of Different Data Modeling Methods

5.3 Performance Comparison between the Static Local Block Data Modeling Method and the Dynamic Local Block Data Modeling Method

Block-oriented localized data modeling (BOLDM) is designed for the GPU's architecture. We divide the entire data volume into small blocks (or sub-volumes) that fit perfectly into shared memory. The static local block data modeling method is pre-defined: we manually divide the entire data volume into small blocks and assign each small block to a GPU block, so each small block has its own shared memory. The dynamic local block data modeling method also divides the entire data volume into small blocks, but dynamically: each data point has its own block, consisting of the data points around it, and the blocks are organized per data point according to its position. The dynamic local block data modeling method also uses shared memory to improve performance. However, it differs from the static local block data modeling method, which puts all of the required sample data

points in their own shared memory. With the dynamic method, some of the required sample data points may not be in the thread's own shared memory, because the shared memory contents are fixed while the block moves with the data point's position. So the static local block data modeling method reads all of its required sample data from its own shared memory, whereas the dynamic local block data modeling method reads some of the required sample data from its own shared memory and the rest from global memory. As a result, the performance of the two methods differs noticeably, as shown in Table 5.6.
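The difference between the two schemes comes down to whether a required sample is guaranteed to sit in the block already staged in shared memory. A sketch of the static partitioning test, with hypothetical names and the block edge length as a parameter:

```c
/* Block coordinates of a point under a static partition of the volume
 * into cubes of edge `block_edge`.  Under the static scheme, a thread
 * only ever needs samples from its own block, so shared memory always
 * suffices; under the dynamic scheme the block is centered on the data
 * point, so required samples near a static block's boundary may fall
 * outside the staged block and must be fetched from global memory. */
typedef struct { int bx, by, bz; } BlockCoord;

BlockCoord block_of(float x, float y, float z, float block_edge)
{
    BlockCoord b;
    b.bx = (int)(x / block_edge);
    b.by = (int)(y / block_edge);
    b.bz = (int)(z / block_edge);
    return b;
}

/* True when two points fall in the same static block, i.e. a shared
 * memory read suffices; false means a global memory read is needed. */
int same_block(BlockCoord a, BlockCoord b)
{
    return a.bx == b.bx && a.by == b.by && a.bz == b.bz;
}
```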

Table 5.6 Comparing the Static and Dynamic Local Block Data Modeling Methods' Running Times (ms) for Various Grid Sizes

Grid Size      Static Method Runtime   Dynamic Method Runtime
20*20*20       39.724                  75.963
30*30*30       41.823                  93.517
40*40*40       48.132                  99.341
50*50*50       73.213                  108.324
60*60*60       91.772                  116.648
70*70*70       130.341                 194.425
80*80*80       208.321                 380.175
90*90*90       250.324                 530.934
100*100*100    367.512                 714.532
110*110*110    460.425                 898.353
120*120*120    586.339                 1059.693

[Figure: bar chart of the static and dynamic local block data modeling runtimes for grid sizes 20*20*20 through 120*120*120]
Figure 5.15 Graph Comparing the Speed of the Static and Dynamic Local Block Data Modeling Methods

We can see that the dynamic local block data modeling method consumes more time than the static local block data modeling method. This is because the dynamic local block data modeling method reads from both shared memory and global memory. As the grid size increases, the running time of the dynamic local block data modeling method increases significantly.

5.4 Overall Speedup and Error Report Analyses

Different GPU-based data modeling methods present different error reports and speedup rates. How to balance the speedup rate and the accuracy is a new issue for GPU-based scattered data visualization. Table 5.7 shows the error reports and speedup factors of the different data modeling methods at grid size 128*128*128.

Table 5.7 Speedup Rates and Error Reports for the Various Data Modeling Methods (grid size 128*128*128)

Method                           Max Abs Numerical Error   Max Abs Relative Error   RMS of Numerical Error   RMS of Relative Error   SpeedUp Factor   Efficiency
CPU-based Global                 34538                     3064                     3174.605                 534.496                 1                1
GPU-based Global                 34538                     3064                     3174.605                 534.496                 27.110           0.2552
GPU-based Global with α=2        18428                     4311                     1273.798                 230.666                 26.536           0.2764
GPU-based Global with α=10       5607                      2803                     215.133                  40.057                  26.325           0.2742
GPU-based Static Local Block     18073                     7828                     1477.340                 241.193                 22.656           0.2360
GPU-based Dynamic Local Block    12456                     5474                     693.412                  141.857                 12.617           0.1314

Table 5.7 shows that we can employ the GPU-based dynamic local block method to increase accuracy significantly by sacrificing running time. The key is finding a way to balance accuracy and runtime. Increasing the α coefficient of Shepard's method is a good option for improving the accuracy of the results.

CHAPTER VI

CONCLUSION AND FUTURE WORK

How to balance performance and accuracy is an important issue in GPU-based scattered data visualization. We have built a GPU-accelerated scattered data visualization system and used it to study various methods of speeding up the performance while still preserving accuracy. We have experimented with various techniques to improve GPU memory usage and to reduce CPU-GPU data communication. The experiments have shown the following results:
1) GPU-based scattered data modeling demonstrates a speedup of 12 to 27 times over its CPU-based counterpart (on an NVidia GeForce GT 525M GPU against an Intel Core i5-2410M 2.30 GHz CPU).
2) Increasing the value of the α parameter in Shepard's method up to a certain value can improve accuracy without causing a performance penalty (the accuracy for the chemical leakage sample data peaks when the value of the α parameter is 10).
3) Localization can reduce the modeling error but causes a performance penalty.
4) Dynamic block localization can increase accuracy significantly, but has a large performance penalty due to the frequent data shifts among GPU memory banks.

5) Static block localization has a smaller performance penalty, but also shows a smaller accuracy improvement.

The parallel efficiency of the system is low (0.1314 to 0.2764). To achieve high memory bandwidth for concurrent accesses, issues related to GPU memory bank conflicts need to be addressed in future work. More GPU-accelerated data interpolation methods, such as the volume spline method, the thin-plate spline method and the multiquadric method, should also be investigated in the future.

REFERENCES

[1] Yingcai Xiao, J. Ziebarth, Physically Based Data Modeling for Sparse Data Volume Visualization, Technical Report No. 98-02, Department of Mathematics and Computer Science, University of Akron, January 1998.

[2] Yingcai Xiao, J. Ziebarth, FEM-based Scattered Data Modeling and Visualization, Computers and Graphics, Vol. 24, No. 5, 2000, 775-789.

[3] Yingcai Xiao, C. Woodbury, Constraining Global Interpolation Methods for Sparse Data Volume Visualization, International Journal of Computers and Applications, Vol. 21, No. 2, 1999, 56-64.

[4] Yingcai Xiao, John P. Ziebarth, Chuck Woodbury, Eric Bayer, Bruce Rundell, Jeroen van der Zijp, The Challenges of Visualizing and Modeling Environmental Data, IEEE Visualization 96 Conference Proceedings, San Francisco, California, October 27 - November 1, 1996, 413-416.

[5] Saranya S. Vinjarapu, GPU-based Scattered Data Modeling, Master's Thesis in Computer Science, University of Akron, 2012.

[6] J. Allard, C. Menier, B. Raffin, et al., Grimage: Markerless 3D Interactions, ACM SIGGRAPH '07, International Conference on Computer Graphics and Interactive Techniques, Emerging Technologies, Article No. 9, 2007.

[7] C. Leong, Y. Xing, N. D. Georganas, Tele-Immersive Systems, IEEE International Workshop on Haptic Audio Visual Environments and their Applications, Ottawa, Canada, 2008.

[8] W. E. Lorensen, H. E. Cline, Marching Cubes: A High Resolution 3D Surface Construction Algorithm, SIGGRAPH '87 Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques, 1987, USA.

[9] Y. Heng, L. Gu, GPU-based Volume Rendering for Medical Image Visualization, Proceedings of the IEEE Engineering in Medicine and Biology 27th Annual Conference, 2005, Shanghai, China.

[10] NVIDIA CUDA C Programming Guide, Version 3.2, NVIDIA Corporation, 2010.

H. R. Nagel, GPU Optimized Marching Cubes Algorithm for Handling Very Large, Temporal Datasets, CiteSeerX Scientific Literature Digital Library and Search Engine, 2010.

[11] David Kirk, Wen-mei Hwu, Programming Massively Parallel Processors: A Hands-on Approach.

[12] Yingcai Xiao, Jinqiang Tian, Hao Sun, Error Analysis in Sparse Data Volume Visualization, International Conference on Imaging Science, Systems, and Technology, Las Vegas, June 24-27, 2002, 813-818.

[13] Lu Wang, Scattered-Data Computing on Various Platforms, Master's Thesis in Computer Science, University of Akron, 2014.

[14] Interpolation. (2015, March 12). In Wikipedia, The Free Encyclopedia. Retrieved 19:34, April 2, 2015, from http://en.wikipedia.org/w/index.php?title=interpolation&oldid=651013303

[15] Marching cubes. (2015, February 9). In Wikipedia, The Free Encyclopedia. Retrieved 19:40, April 2, 2015, from http://en.wikipedia.org/w/index.php?title=marching_cubes&oldid=646319885