Bottlenecks in Distributed Real-Time Visualization of Huge Data on Heterogeneous Systems

Gökçe Yıldırım Kalkan, Simsoft Bilg. Tekn. Ltd. Şti., Ankara, Turkey. Email: gokce@simsoft.com.tr
Veysi İşler, Dept. of Computer Engineering, Middle East Technical University, Ankara, Turkey. Email: isler@ceng.metu.edu.tr

Abstract

In computer graphics, generating high-quality images at high rates when rendering complex scenes from huge data is a challenging task. A practical solution to this problem is distributing the rendering load among heterogeneous computers or graphics processing units. In this article, we investigate the bottlenecks, in a single processing unit or in the connections between several processing units, that need to be overcome in such approaches. We provide simulation results for these bottlenecks and outline guidelines that can be useful for other researchers working on distributed real-time visualization of huge data.

I. INTRODUCTION

Rendering high-resolution, large data involving complex objects in real-time applications such as training simulators is a standing challenge in computer graphics. Despite the advances in graphics hardware, memory constraints and bandwidth bottlenecks make visualization of huge data very difficult or impossible on a single graphics system. At this point, the scalability of graphics systems, and thus parallel rendering on distributed systems, plays an important role in improving the performance of computer graphics software. In other words, the rendering workload should be distributed among many processing units that are connected either within a single computer (cluster) or over a computer network. There exist many studies on distributed rendering of huge data, as reviewed in Section I-A.
However, these studies either (i) use specialized hardware or computer networks (e.g., optical networks [1]) that are not affordable for small and cheap visualization systems, or (ii) employ algorithmic solutions for distributing the primitives to be rendered, which are mostly unrealistic [2], e.g. the many-body problem [3], or not interactive [4]. In this article, we investigate the bottlenecks in real-time visualization of complex data with standard computers connected over a standard network.

A. Related Studies

As outlined in Figure 1, there are two main approaches to distributed rendering: inter-frame and intra-frame distribution. In inter-frame rendering, individual frames are generated by the processing units and the generated frames are combined on a single display unit. In intra-frame rendering, on the other hand, parts of a frame or a scene are distributed. According to what is distributed, intra-frame rendering falls into three classes: sort-first, sort-middle and sort-last [5]. There are various APIs which facilitate one of these approaches, among them WireGL [6], Chromium [7], the OpenGL Multipipe SDK (MPK) [8] and Equalizer [9]. Among these APIs, Equalizer provides a more scalable, flexible and compatible interface with less implementation overhead [10]. In the sort-first approach, a camera view is divided into several subviews, or partitions, and each subview is assigned to a processing unit [11], [12], [13] (see also Figure 1). The outputs of the processing units are easily tiled together by a server for a final display device [5]. This approach depends on the size of the input dataset but is independent of image resolution. Therefore, it is best suited for applications whose frame rate decreases with increasing image resolution. In the sort-last approach, the scene is partitioned into sets of objects or entities and these sets are shared among the processing units [11], [14] (see also Figure 1).
The outputs of the processing units are snapshots of only a part of the scene, and they are combined by a server, possibly with a post-processing stage in which the depths of the individual objects are taken into account to obtain a coherent snapshot of the scene [5]. This approach depends on image resolution but is independent of the input dataset. Therefore, it is well suited for applications whose frame rate decreases with an increasing 3D input database. The sort-middle approach is a hybrid of sort-first and sort-last, in which parts of the view as well as the objects in the sub-views are distributed [11], [15].

B. This Study

Unlike the real-time interactive distributed visualization systems that depend on very fast network infrastructures, our work focuses on real-time interactive distributed visualization systems on standard 1 Gbit network infrastructures, and proposes design guidelines for different load balancing strategies on such architectures. We implement and compare two load balancing methods against no load balancing: the first distributes the load among a set of computers, whereas the second distributes the load among many GPUs in the same computer.

II. OUR LOAD BALANCING STRATEGIES

In this study, to share the rendering load and achieve higher refresh rates, we propose load balancing strategies for a distributed network environment and local distribution strategies

Fig. 1. Overview of the different approaches to distributed rendering (inter-frame vs. intra-frame rendering; sort-first and sort-last distribution of the scene from a load balancer over IG1 to IGN, combined by a server for the display).

and compare their results with scenarios which do not use load balancing strategies. The Network Sharing Method (NSM) is based on sharing the rendering load among distributed computers, whereas the GPU Sharing Method (GSM) is based on sharing the rendering load locally within the same computer.

A. Rendering with a network of computers (Network Sharing Method, NSM)

In this method, the load is distributed among a network of computers. The distributed environment consists of the Central Control Computer and a number of Image Generators (IGs). An IG is an image generator computer which renders the scene. The Central Control Computer is the master computer: it listens to the states of the IGs in terms of refresh rate and, if an IG's refresh rate is under a threshold, commands other IGs to help it render its frames. The flow is processed as shown in Figure 2. The Central Control Computer commands IG 1 to render the frame for IG 2. When IG 1 completes IG 2's frame, it sends the frame over the network to IG 2. IG 2 receives the frame and renders it. The decision to distribute load among IGs is made by the Central Control Computer according to the current refresh rates.

B. Rendering locally among GPUs (GPU Sharing Method, GSM)

In this method, the load is distributed among different graphics processing units (GPUs) in the same computer. The IG application is responsible for rendering the scene. In this method, the flow is processed as shown in Figure 3. The IG application commands the helper GPU (GPU 2) to render the frame that includes post-processing effects, such as particles, for the main GPU (GPU 1). GPU 1 is the GPU responsible for rendering the whole scene. When GPU 2 completes the frame, it shares the frame with GPU 1 via the shared memory of the IG.
In the meantime, GPU 1 renders the main scene and blends in the frame with the post-processing effects, such as particles, rendered by GPU 2. In this method, the load balancing decision is made locally in the IG by the IG application, according to the state of GPU 1 in terms of refresh rate. When the refresh rate is under a threshold, the IG application decides that the main GPU should distribute its load to the helper GPU. The output of the main GPU is used as the IG output; the output of the helper GPU is used only as an input for the main GPU.
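The NSM control loop described above can be sketched as follows. This is a minimal illustration with function names, a threshold value and a helper-selection policy of our own; the paper does not give an implementation, and the actual system may choose helpers differently.

```python
# Sketch of the NSM decision made by the Central Control Computer:
# it watches each IG's refresh rate and, when one IG falls below a
# threshold, commands the fastest IG to render the slow IG's frames.

FPS_THRESHOLD = 25.0  # assumed target refresh rate (frames per second)

def plan_help(refresh_rates):
    """Given {ig_name: fps}, return (helper, helpee) or None.

    helpee: the slowest IG, if it is under the threshold.
    helper: the fastest IG, provided it is itself above the threshold.
    """
    helpee = min(refresh_rates, key=refresh_rates.get)
    helper = max(refresh_rates, key=refresh_rates.get)
    if refresh_rates[helpee] >= FPS_THRESHOLD:
        return None  # nobody needs help
    if helper == helpee or refresh_rates[helper] < FPS_THRESHOLD:
        return None  # no IG has headroom to help
    return helper, helpee

# Example: IG2 has dropped to 18 fps while IG1 still renders at 60 fps,
# so the controller would command IG1 to render frames for IG2.
print(plan_help({"IG1": 60.0, "IG2": 18.0, "IG3": 30.0}))  # ('IG1', 'IG2')
```

The same shape of decision applies locally in GSM, with the IG application in the controller role and the two GPUs as helper and helpee.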

Fig. 2. Flowchart of the NSM scenario: 1) the Central Control Computer sends Command_Render(frame_for_IG2) to IG 1 and 2) Command_Wait_For_IG1(frame_for_IG2) to IG 2; IG 1 renders the frame while IG 2 waits; 3) IG 1 sends frame_for_IG2 to IG 2, which renders the frame from IG 1.

Fig. 3. Flowchart of the GSM scenario: 1) the IG application sends Command_Render(frame_for_GPU1) to GPU 2 (helper GPU) and 2) Command_Wait_For_GPU2(frame_for_GPU1) to GPU 1 (main GPU); GPU 2 renders the frame; 3) GPU 2 shares frame_for_GPU1 via memory while GPU 1 renders the main scene and blends in the frame from GPU 2.

III. EXPERIMENTS AND RESULTS

The experiments were performed on a network of computers, each of which contains two GTX 680 graphics cards (2 GB RAM, 256-bit), an Intel i7-2700K processor and 16 GB of main memory (see Figure 5 for an outline of the experimental environment). The scene is a large terrain in which dust is scattered as particles in some regions (see Figure 4 for a detailed snapshot). Each IG controls a window with a camera view from the same viewpoint with contiguous fields of view, so that we obtain a large field of view for the scene.

A. Network Method

In the network, there is significant latency due to the transmission of a packet, which can lead to incoherence in the displayed frames. Two solutions to this problem are: (i) block the IG receiving help until the next frame in sequence arrives; (ii) send the frames to the slow IG at the latest at t - t_N, where t_N is the delay due to network transmission and the related processing. In this section, we look at both. In any case, for load distribution to be worth the effort for an IG (IG 1) that needs help, the time for rendering one frame must cost at least the time for a packet to travel over the network plus the rendering time of the helper IG (IG 2):

t_IG1 > t_IG2 + t_N, (1)

where t_IG1 and t_IG2 are the rendering times of IG 1 and IG 2, respectively. The decision of the Central Control Computer should be based on the rendering times of IG 1 and IG 2 according to the criterion in Equation 1.
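The worthwhileness criterion of Equation 1 and the non-blocking send deadline from strategy (ii) can be written down directly. The function names and the example numbers below are ours, purely for illustration (times in milliseconds):

```python
# Equation 1: offloading a frame from IG1 to a helper IG2 only pays off
# when IG1's own render time exceeds the helper's render time plus the
# network delay t_N.

def offload_worthwhile(t_ig1, t_ig2, t_n):
    """True if helping IG1 is worth the effort: t_IG1 > t_IG2 + t_N."""
    return t_ig1 > t_ig2 + t_n

def latest_send_time(t_deadline, t_n):
    """Non-blocking variant: a frame needed at time t must leave the
    helper by t - t_N so it arrives before the helpee needs it."""
    return t_deadline - t_n

# IG1 needs 60 ms per frame; the helper renders it in 20 ms and the
# network adds 15 ms, so helping is worthwhile (60 > 20 + 15).
print(offload_worthwhile(60.0, 20.0, 15.0))  # True
# A frame due at t = 200 ms must be sent by 185 ms over a 15 ms link.
print(latest_send_time(200.0, 15.0))         # 185.0
```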
In addition, for an application that needs to achieve 25 fps, the time for the helper IG (IG 2) to render its own frame plus the time to render the other IG's frame must be smaller than the frame budget (40 ms in the case of 25 fps):

t_IG + t_OF < 40 ms, (2)

where t_IG is the time a helper IG needs to render its own frame, and t_OF is the time it takes the helper IG to render the other IG's frame. Based on this, we conclude that the decision of the Central Control Computer should also take the render times of helper IGs into account according to the criterion in Equation 2. In Figure 6, we analyze the effect of the received help on the refresh rate of the IG receiving help. In these results, the helping IG sends a frame every 50 ms. In the blocking case, we see that if the IG is fast enough, the received help cannot increase its refresh rate, because getting help means waiting for a packet from the network, which costs more than rendering the frame itself. In the non-blocking case, however, no matter how fast the IG is, receiving help increases its refresh rate; the gain, though, diminishes as the IG's own speed increases. In Figure 7, we see the effect of the Network Method on the refresh rate of the helping IG. We see that

Fig. 4. A few snapshots from the rendered scenes.

Fig. 5. Experiment environment: the Central Control Computer, a data server and eight rackmount IG PCs (IG1 to IG8) connected by a network switch on a Gigabit simulation LAN, with an operator station (monitor, mouse, keyboard) attached through a KVM box.

Fig. 6. The effect of the Network Method on the IG receiving help: (a) with blocking the IG receiving help; (b) without blocking the IG receiving help.

the helping IG is only slightly affected. This is because the IG prepares and sends frames over the network in a separate thread from the one rendering the scene.

B. GPU Method

In Figure 8, we analyze the effect of the received help on the refresh rate of the GPU receiving help. The helping GPU sends every frame to the helpee GPU. We observe that, whatever the slow GPU's rate is, the help from the fast GPU leads to approximately similar rates, in both the blocking and the non-blocking cases. This is because shared memory allows very fast transfer of frames to the helpee GPU, so the helpee GPU does not get much chance to render a frame itself. Figure 8 thus shows that GPU-GPU sharing has a limit, no matter the original speed of the helpee GPU. In Figure 9, we see the effect of the GPU Method on the refresh rate of the helping GPU. We see that the helping GPU is affected considerably. This is because the time the helping GPU spends copying the rendered data to shared memory is roughly three times the time of rendering a scene.

C. Guidelines

Based on the results provided in this section, we offer the following guidelines: 1) Although the distributed rendering literature has mostly focused on intra-frame load distribution strategies, inter-frame rendering is still plausible despite the network latency. 2) To overcome the network latency problem, several strategies can be adopted based on the demands of the rendering problem.
One is blocking the rendering node while waiting for the frame from another IG; the other is not blocking the IG receiving help, and taking care of the coherence issue by making sure that the frames are sent at least t_N before the IG finishes rendering its own frame. 3) GPU-GPU sharing is much faster than IG-IG sharing. The latency problem, though less severe, exists for

Fig. 7. The effect of the Network Method on the IG giving help.

Fig. 8. The effect of the GPU Method on the GPU receiving help: (a) with blocking the GPU receiving help; (b) without blocking the GPU receiving help.

Fig. 9. The effect of the GPU Method on the GPU giving help.

GPU-GPU communication as well. The same solutions proposed for IG-IG communication apply to GPU-GPU latency. 4) For the best distributed rendering performance, IG-IG and GPU-GPU distributed rendering should both be utilized.

IV. CONCLUSION

With the desire to visualize huge data or simulate complex scenes at high resolution, it has become a necessity to use parallel and distributed rendering techniques and architectures for fast, real-time, interactive simulation systems. Existing approaches either share individual frames (inter-frame methods) or parts of a single frame (sort-first, sort-last and sort-middle methods), and they generally use advanced hardware connected by very expensive, ultra-fast networks. This article has investigated the bottlenecks in parallel and distributed rendering systems through simulations. It has shown that in a locally distributed rendering system, the transfer from one GPU to the other needs to go through the CPU and main memory, which is a limiting factor. Moreover, for distributed rendering using a network of computers, the network speed is a bottleneck. We argue that, under these bottlenecks, rendering can be distributed provided that the rendering speed of a processing unit is slow enough to compensate for the time delay of the data transfer, either within the computer or over the network.

ACKNOWLEDGMENT

This work is partially funded by the Ministry of Science under project number SANTEZ 96.STZ.2011-1. We would also like to thank Simsoft for providing the software and hardware environment for testing the algorithms developed in this work.

REFERENCES

[1] R. E. De Grande and A. Boukerche, A dynamic, distributed, hierarchical load balancing for HLA-based simulations on large-scale environments, in Euro-Par 2010: Parallel Processing. Springer, 2010, pp. 242–253.
[2] S. Marchesin, C. Mongenet, and J.-M.
Dischler, Dynamic load balancing for parallel volume rendering, Eurographics Symposium on Parallel Graphics and Visualization, 2006.
[3] R. Hagan and Y. Cao, Multi-GPU load balancing for in-situ visualization, in International Conference on Parallel and Distributed Processing Techniques and Applications, 2011.
[4] R. E. De Grande and A. Boukerche, Predictive dynamic load balancing for large-scale HLA-based simulations, in Proceedings of the 2011 IEEE/ACM 15th International Symposium on Distributed Simulation and Real Time Applications. IEEE Computer Society, 2011, pp. 4–11.
[5] P. Yin, X. Jiang, J. Shi, and R. Zhou, Multi-screen tiled displayed, parallel rendering system for a large terrain dataset, International Journal of Virtual Reality, vol. 5, no. 4, pp. 47–54, 2006.
[6] G. Humphreys, M. Eldridge, I. Buck, G. Stoll, M. Everett, and P. Hanrahan, WireGL: a scalable graphics system for clusters, in SIGGRAPH, 2001, pp. 129–140.
[7] G. Humphreys, M. Houston, R. Ng, R. Frank, S. Ahern, P. D. Kirchner, and J. T. Klosowski, Chromium: a stream-processing framework for interactive rendering on clusters, ACM Transactions on Graphics (TOG), vol. 21, no. 3, pp. 693–702, 2002.
[8] OpenGL Multipipe SDK white paper, document number 007-4516-003, SGI Techpubs Library, 2004.
[9] S. Eilemann, M. Makhinya, and R. Pajarola, Equalizer: a scalable parallel rendering framework, IEEE Transactions on Visualization and Computer Graphics, vol. 15, no. 3, pp. 436–452, 2009.
[10] U. Gun, Interactive editing of complex terrains on parallel graphics architectures, M.Sc. thesis, Middle East Technical University, Department of Computer Engineering, 2009.
[11] S. Molnar, M. Cox, D. Ellsworth, and H. Fuchs, A sorting classification of parallel rendering, IEEE Computer Graphics and Applications, vol. 14, no. 4, pp. 23–32, 1994.
[12] B. Moloney, M. Ament, D. Weiskopf, and T. Moller, Sort-first parallel volume rendering, IEEE Transactions on Visualization and Computer Graphics, vol. 17, no. 8, pp. 1164–1177, 2011.
[13] E. Bethel, G. Humphreys, B. Paul, and J. D. Brederson, Sort-first, distributed memory parallel visualization and rendering, in Proceedings of the 2003 IEEE Symposium on Parallel and Large-Data Visualization and Graphics. IEEE Computer Society, 2003, p. 7.
[14] T. Fogal, H. Childs, S. Shankar, J. Krüger, R. D. Bergeron, and P. Hatcher, Large data visualization on distributed memory multi-GPU clusters, in Proceedings of the Conference on High Performance Graphics. Eurographics Association, 2010, pp. 57–66.
[15] R. Samanta, T. Funkhouser, K. Li, and J. P. Singh, Hybrid sort-first and sort-last parallel rendering with a cluster of PCs, in Proceedings of the ACM SIGGRAPH/EUROGRAPHICS Workshop on Graphics Hardware. ACM, 2000, pp. 97–108.