Parallel Image Processing with CUDA A case study with the Canny Edge Detection Filter
1 Parallel Image Processing with CUDA A case study with the Canny Edge Detection Filter Daniel Weingaertner Informatics Department Federal University of Paraná - Brazil Hochschule Regensburg Daniel Weingaertner (DInf-UFPR) FH-Regensburg 1 / 40
2 Summary 1 Introduction 2 Insight Toolkit (ITK) 3 GPGPU and CUDA 4 Integrating CUDA and ITK 5 Canny Edge Detection 6 Experimental Results 7 Conclusion Daniel Weingaertner (DInf-UFPR) FH-Regensburg 2 / 40
3 Paraná Brazil Daniel Weingaertner (DInf-UFPR) FH-Regensburg 3 / 40
4 Brazil Europe Daniel Weingaertner (DInf-UFPR) FH-Regensburg 4 / 40
5 Paraná Daniel Weingaertner (DInf-UFPR) FH-Regensburg 5 / 40
6 Curitiba Daniel Weingaertner (DInf-UFPR) FH-Regensburg 6 / 40
7 Federal University of Paraná Daniel Weingaertner (DInf-UFPR) FH-Regensburg 7 / 40
8 Informatics Department Undergraduate: Bachelor in Computer Science (8-semester course, 80 incoming students per year); Bachelor in Biomedical Informatics (8-semester course, 30 incoming students per year). Graduate: Master and PhD in Computer Science — Algorithms, Image Processing, Computer Vision, Artificial Intelligence; Databases, Scientific Computing and Open Source Software, Human-Computer Interaction; Computer Networks, Embedded Systems Daniel Weingaertner (DInf-UFPR) FH-Regensburg 8 / 40
9 Summary 1 Introduction 2 Insight Toolkit (ITK) 3 GPGPU and CUDA 4 Integrating CUDA and ITK 5 Canny Edge Detection 6 Experimental Results 7 Conclusion Daniel Weingaertner (DInf-UFPR) FH-Regensburg 9 / 40
10 Insight Toolkit (ITK) Created in 1999, open source, multi-platform, object-oriented (templates), good documentation and support Figure: Image Processing Workflow in ITK Daniel Weingaertner (DInf-UFPR) FH-Regensburg 10 / 40
11 ITK - Sample code

#include <cstdlib>
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkImageFileWriter.h"
#include "itkCannyEdgeDetectionImageFilter.h"

typedef itk::Image<float, 2> ImageType;
typedef itk::ImageFileReader<ImageType> ReaderType;
typedef itk::ImageFileWriter<ImageType> WriterType;
typedef itk::CannyEdgeDetectionImageFilter<ImageType, ImageType> CannyFilter;

int main(int argc, char *argv[]) {

  ReaderType::Pointer reader = ReaderType::New();
  reader->SetFileName(argv[1]);
  reader->Update();

  CannyFilter::Pointer canny = CannyFilter::New();
  canny->SetInput(reader->GetOutput());
  canny->SetVariance(atof(argv[3]));
  canny->SetUpperThreshold(atoi(argv[4]));
  canny->SetLowerThreshold(atoi(argv[5]));
  canny->Update();

  WriterType::Pointer writer = WriterType::New();
  writer->SetFileName(argv[2]);
  writer->SetInput(canny->GetOutput());
  writer->Update();

  return EXIT_SUCCESS;
}

Daniel Weingaertner (DInf-UFPR) FH-Regensburg 11 / 40
12 Summary 1 Introduction 2 Insight Toolkit (ITK) 3 GPGPU and CUDA 4 Integrating CUDA and ITK 5 Canny Edge Detection 6 Experimental Results 7 Conclusion Daniel Weingaertner (DInf-UFPR) FH-Regensburg 12 / 40
13 What is GPGPU Computing? The use of the GPU for general-purpose computation. CPU and GPU can be used concurrently. To the end user, it's simply a way to run applications faster. Daniel Weingaertner (DInf-UFPR) FH-Regensburg 13 / 40
14 What is CUDA? CUDA = Compute Unified Device Architecture. A general-purpose parallel computing architecture. Provides libraries, a C language extension and a hardware driver. Daniel Weingaertner (DInf-UFPR) FH-Regensburg 14 / 40
15 Parallel Processing Models Daniel Weingaertner (DInf-UFPR) FH-Regensburg 15 / 40
16 Single-Instruction Multiple-Thread Unit Creates, handles, schedules and executes groups of 32 threads (warps). All threads in a warp start at the same point, but they are free to jump to different code positions independently. Daniel Weingaertner (DInf-UFPR) FH-Regensburg 16 / 40
17 CUDA Architecture Overview Daniel Weingaertner (DInf-UFPR) FH-Regensburg 17 / 40
18 Optimization Strategies for CUDA Main optimization strategies for CUDA involve: Optimized/careful memory access Maximization of processor utilization Maximization of non-serialized instructions Daniel Weingaertner (DInf-UFPR) FH-Regensburg 18 / 40
19 CUDA - Sample Code

#include <stdio.h>
#include <assert.h>
#include <cuda.h>

void incrementArrayOnHost(float *a, int N)
{
  int i;
  for (i = 0; i < N; i++) a[i] = a[i] + 1.f;
}

__global__ void incrementArrayOnDevice(float *a, int N)
{
  int idx = blockIdx.x * blockDim.x + threadIdx.x;
  if (idx < N) a[idx] = a[idx] + 1.f;
}

int main(void)
{
  float *a_h, *b_h;   // pointers to host memory
  float *a_d;         // pointer to device memory
  int i, N = 10000;
  size_t size = N * sizeof(float);
  a_h = (float *)malloc(size);
  b_h = (float *)malloc(size);
  cudaMalloc((void **)&a_d, size);
  for (i = 0; i < N; i++) a_h[i] = (float)i;
  cudaMemcpy(a_d, a_h, sizeof(float) * N, cudaMemcpyHostToDevice);
  incrementArrayOnHost(a_h, N);
  int blockSize = 4;  // threads per block (illustrative value)
  int nBlocks = N / blockSize + (N % blockSize == 0 ? 0 : 1);
  incrementArrayOnDevice<<<nBlocks, blockSize>>>(a_d, N);
  cudaMemcpy(b_h, a_d, sizeof(float) * N, cudaMemcpyDeviceToHost);
  for (i = 0; i < N; i++) assert(a_h[i] == b_h[i]);
  free(a_h); free(b_h); cudaFree(a_d);
}

Daniel Weingaertner (DInf-UFPR) FH-Regensburg 19 / 40
20 Summary 1 Introduction 2 Insight Toolkit (ITK) 3 GPGPU and CUDA 4 Integrating CUDA and ITK 5 Canny Edge Detection 6 Experimental Results 7 Conclusion Daniel Weingaertner (DInf-UFPR) FH-Regensburg 20 / 40
21 Integrating CUDA Filters into ITK Workflow ITK community suggests: Careful! Re-implement filters where parallelizing provides significant speedup Consider the entire workflow: copying to/from the GPU is very time consuming Premature optimization is the root of all evil! (Donald Knuth) Daniel Weingaertner (DInf-UFPR) FH-Regensburg 21 / 40
23 CUDA Insight Toolkit (CITK) Changes to ITK Slight architecture change: CudaImportImageContainer Backwards compatible Data transfer between HOST and DEVICE only on demand Allows for filter chaining inside the DEVICE Daniel Weingaertner (DInf-UFPR) FH-Regensburg 22 / 40
24 Summary 1 Introduction 2 Insight Toolkit (ITK) 3 GPGPU and CUDA 4 Integrating CUDA and ITK 5 Canny Edge Detection 6 Experimental Results 7 Conclusion Daniel Weingaertner (DInf-UFPR) FH-Regensburg 23 / 40
25 CudaCanny itk::CudaCannyEdgeDetectionImageFilter Algorithm 1 Canny Edge Detection Filter: Gaussian Smoothing, Gradient Computation, Non-Maximum Suppression, Hysteresis Daniel Weingaertner (DInf-UFPR) FH-Regensburg 24 / 40
26 Gradient Computation with Sobel Filter itk::CudaSobelEdgeDetectionImageFilter (a) Sobel X (b) Sobel Y L_v = sqrt(L_x^2 + L_y^2) (1) theta = arctan(L_y / L_x) (2) Daniel Weingaertner (DInf-UFPR) FH-Regensburg 25 / 40
27 Optimization for Edge Direction Computation Daniel Weingaertner (DInf-UFPR) FH-Regensburg 26 / 40
28 Code Extract from CudaSobel Daniel Weingaertner (DInf-UFPR) FH-Regensburg 27 / 40
29 Hysteresis Operation Daniel Weingaertner (DInf-UFPR) FH-Regensburg 28 / 40
30 Hysteresis Algorithm
Algorithm 2 Hysteresis on CPU
  Transfer the Gradient/NMS images to the GPU
  repeat
    Run the hysteresis kernel on the GPU
  until no pixel changes status
  Return edge image
Daniel Weingaertner (DInf-UFPR) FH-Regensburg 29 / 40
31 Hysteresis Algorithm
Algorithm 3 Hysteresis on GPU
  Load an image region of size 18x18 into shared memory
  modified <- false
  repeat
    modified_region <- false
    Synchronize threads of the same multiprocessor
    if pixel changes status then
      modified <- true
      modified_region <- true
    end if
    Synchronize threads of the same multiprocessor
  until modified_region = false
  if modified = true then
    Update modified status on HOST
  end if
Daniel Weingaertner (DInf-UFPR) FH-Regensburg 30 / 40
32 Summary 1 Introduction 2 Insight Toolkit (ITK) 3 GPGPU and CUDA 4 Integrating CUDA and ITK 5 Canny Edge Detection 6 Experimental Results 7 Conclusion Daniel Weingaertner (DInf-UFPR) FH-Regensburg 31 / 40
33 Methodology Hardware: Server: CPU: 4x AMD Opteron(tm) processors with 8 cores each, each core with 512 KB cache, and 126 GB RAM. GPU1: NVIDIA Tesla C2050 with 448 cores at 1.15 GHz and 3 GB RAM. GPU2: NVIDIA Tesla C1060 with 240 cores at 1.3 GHz and 4 GB RAM. Desktop: CPU: Intel Core 2 Duo E7400 at 2.80 GHz with 3072 KB cache and 2 GB RAM. GPU: NVIDIA GeForce 8800 GT with 112 cores at 1.5 GHz and 512 MB RAM. Daniel Weingaertner (DInf-UFPR) FH-Regensburg 32 / 40
34 Methodology Images from the Berkeley Segmentation Dataset, grouped into bases by image resolution and number of images. Daniel Weingaertner (DInf-UFPR) FH-Regensburg 33 / 40
35 Performance Tests Daniel Weingaertner (DInf-UFPR) FH-Regensburg 34 / 40
36 Performance Tests Daniel Weingaertner (DInf-UFPR) FH-Regensburg 35 / 40
37 Performance Tests Daniel Weingaertner (DInf-UFPR) FH-Regensburg 36 / 40
38 Performance Tests Daniel Weingaertner (DInf-UFPR) FH-Regensburg 37 / 40
39 Summary 1 Introduction 2 Insight Toolkit (ITK) 3 GPGPU and CUDA 4 Integrating CUDA and ITK 5 Canny Edge Detection 6 Experimental Results 7 Conclusion Daniel Weingaertner (DInf-UFPR) FH-Regensburg 38 / 40
40 Conclusion Parallel Programming Parallel programming is definitely the way to go. Implementing efficient parallel code is demanding. Programmers need to know more about the hardware, especially the memory architecture. Canny Filter with CUDA We achieved a great speedup on the edge detection filter. We also noticed that the existing implementation is not efficient. There is still a LOT of work to do if we want to parallelize ITK. Daniel Weingaertner (DInf-UFPR) FH-Regensburg 39 / 40
42 Contact Thank You! Daniel Weingaertner Daniel Weingaertner (DInf-UFPR) FH-Regensburg 40 / 40
String Matching on a multicore GPU using CUDA
String Matching on a multicore GPU using CUDA Charalampos S. Kouzinopoulos and Konstantinos G. Margaritis Parallel and Distributed Processing Laboratory Department of Applied Informatics, University of
Implementation of Canny Edge Detector of color images on CELL/B.E. Architecture.
Implementation of Canny Edge Detector of color images on CELL/B.E. Architecture. Chirag Gupta,Sumod Mohan K [email protected], [email protected] Abstract In this project we propose a method to improve
INTRODUCTION TO WINDOWS 7
INTRODUCTION TO WINDOWS 7 Windows 7 Editions There are six different Windows 7 editions: Starter Home Basic Home Premium Professional Enterprise Ultimate Starter Windows 7 Starter edition does not support
NVIDIA CUDA GETTING STARTED GUIDE FOR MICROSOFT WINDOWS
NVIDIA CUDA GETTING STARTED GUIDE FOR MICROSOFT WINDOWS DU-05349-001_v6.0 February 2014 Installation and Verification on TABLE OF CONTENTS Chapter 1. Introduction...1 1.1. System Requirements... 1 1.2.
GPGPU accelerated Computational Fluid Dynamics
t e c h n i s c h e u n i v e r s i t ä t b r a u n s c h w e i g Carl-Friedrich Gauß Faculty GPGPU accelerated Computational Fluid Dynamics 5th GACM Colloquium on Computational Mechanics Hamburg Institute
The High Performance Internet of Things: using GVirtuS for gluing cloud computing and ubiquitous connected devices
WS on Models, Algorithms and Methodologies for Hierarchical Parallelism in new HPC Systems The High Performance Internet of Things: using GVirtuS for gluing cloud computing and ubiquitous connected devices
A general-purpose virtualization service for HPC on cloud computing: an application to GPUs
A general-purpose virtualization service for HPC on cloud computing: an application to GPUs R.Montella, G.Coviello, G.Giunta* G. Laccetti #, F. Isaila, J. Garcia Blas *Department of Applied Science University
