A Framework for Creating a Distributed Rendering Environment on the Compute Clusters
(IJACSA) International Journal of Advanced Computer Science and Applications

Ali Sheharyar, IT Research Computing, Texas A&M University at Qatar, Doha, Qatar
Othmane Bouhali, Science Program and IT Research Computing, Texas A&M University at Qatar, Doha, Qatar

Abstract: This paper discusses the deployment of an existing render farm manager in a typical compute cluster environment such as a university. Usually, a render farm and a compute cluster use different queue managers, and each assumes total control over the physical resources. However, carving physical resources out of an existing compute cluster in a university-like environment, where the cluster is primarily used to run numerical simulations, may not be possible. It can reduce overall resource utilization whenever compute tasks outnumber rendering tasks, and it increases the system administration cost. This paper proposes a framework that creates a dynamic distributed rendering environment on top of a compute cluster using existing render farm managers, without requiring physical separation of the resources.

Keywords: distributed; rendering; animation; render farm; cluster

I. INTRODUCTION

A. Background

Rendering is the process of generating one or more digital images from a model or a collection of models, characterized as a virtual scene. A virtual scene is described in a scene file that contains information such as geometry, textures and lights, and is modelled in a 3D modelling application. The most commonly used modelling applications are Blender [1], Autodesk 3D Studio Max [2] and Autodesk Maya [3]. All modelling applications have a user interface with a drawing area where users can create a variety of geometrical objects, manipulate them, apply textures and animate them. Fig. 1 shows the user interface of the Blender 3D modelling application [1]. A virtual scene is then given to a renderer, which generates a set of high-quality images that are later used to produce the final animation. Some of the most popular renderers are mental ray [4], V-Ray [5] and Blender [1].

Rendering is a compute-intensive and time-consuming process. The rendering time for an individual frame may vary from a few seconds to several hours, depending on scene complexity, degree of realism (shadows, lights, etc.) and output resolution. For example, a short animation project may be only two minutes long, but at 30 frames per second (fps) it contains 3,600 frames. With an average rendering time of approximately 2 minutes for a fairly simple frame, the total rendering time is 120 hours. Fortunately, rendering is a highly parallelizable task because the rendering of one frame does not depend on any other frame. To reduce the total rendering time, the rendering of individual frames can be distributed to a group of computers on the network. An animation studio, a company dedicated to the production of animated films, typically has a cluster of computers dedicated to rendering virtual scenes; this cluster is called a render farm.

B. Objectives

In a university environment, rendering can be complicated because many researchers do not have access to a dedicated rendering machine [6]. Even when such a machine is available, it cannot be held for long periods because it is needed for other research use, and data can be lost due to hardware failure.
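To make the arithmetic above concrete, the following short Python sketch (an illustration added here, not part of the paper) recomputes the example; the nine-node count is an arbitrary assumption used only to show the ideal parallel speed-up.

    # Back-of-the-envelope calculation for the two-minute animation example.
    duration_s = 2 * 60          # two-minute animation, in seconds
    fps = 30                     # frames per second
    frame_render_min = 2         # average minutes to render one frame

    total_frames = duration_s * fps                        # 3,600 frames
    serial_hours = total_frames * frame_render_min / 60    # 120 hours on one machine

    nodes = 9                    # hypothetical number of render nodes
    distributed_hours = serial_hours / nodes               # ideal case, ignoring overheads

    print(total_frames, serial_hours, round(distributed_hours, 1))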
These problems can be addressed by creating a distributed rendering environment. Many universities already operate one or more compute clusters for high-performance computing tasks. Distributed rendering on a compute cluster is possible, but it requires a great deal of manual interaction with the cluster's job scheduler and is cumbersome.

Fig. 1. Screenshot of the Blender user interface

In this paper, a framework is presented that creates a render farm-like environment on a compute cluster by using existing render farm software. In this way, researchers and students are presented with an interface similar to what animation studios typically have. Moreover, the framework does not require manual interaction with the cluster's job scheduler and makes the rendering workflow smoother.
C. Related Work

There has been some work on image rendering on clusters and on grids [7]; this section briefly reviews it.

Jing and Gong [8] discuss the design and implementation of a render farm manager based on OpenPBS, a resource manager used for compute clusters. They propose extending OpenPBS to facilitate render job management: they maintain a state table holding the status of render jobs, implement a new command "qsubr" to submit jobs and another command "qstatr" to monitor them, and provide a web-based interface to facilitate job submission and monitoring.

Gooding et al. [6] describe an implementation of distributed rendering on diverse platforms rather than a single cluster. They consider utilizing all available resources such as recycled computers, community clusters and even TeraGrid [9]. One benefit of this approach is access to diverse computing resources, but it also requires significant changes to the infrastructure: a couple of new servers must be added to host the software for job submission and distribution to render machines, and a new central storage system is needed so that all render nodes can access a render job's resources (textures, etc.) and save the output back, which in turn implies changes to the networking infrastructure. They offer only a command-line interface for job submission and support only the RenderMan rendering engine [10].

Chong et al. [11] propose a framework for distributed rendering over the Grid, following two different approaches. In one approach, a user submits a rendering job on demand through a portal and downloads the rendered images from the remote server at a later time. In the other, the job is submitted to the Grid through a GUI plug-in within the 3D modelling software, and every rendered image is sent back to the client individually as it completes. Both approaches require significant implementation effort. They also discuss compression methods, which are beyond the scope of this paper.

All of these approaches focus on implementing the render farm manager (or job manager) itself. They give users an interface to submit and control render jobs in the form of a command line over SSH, an online web portal and/or a plug-in within the 3D modelling software. In summary, they need a significant amount of time and resources to implement all the details of the various software components. In the next section, a new approach is proposed.

This paper is divided into several sections. Sections II and III give an overview of render farms and compute clusters, respectively. Section IV describes the current approaches and the proposed approach to rendering computer animation in distributed computing environments. Section V presents the experimental results and, finally, Sections VI and VII present the conclusion and future work, respectively.

II. RENDER FARM

A render farm is a cluster of computers connected to shared storage and dedicated to rendering computer-generated imagery. Usually, a render farm has one master (or head) machine (node) and multiple slave machines. The head node runs the job scheduler that manages the allocation of resources on slave machines to jobs submitted by users (artists).
Fig. 2 shows the client/server architecture of a render farm. In the diagram, the head node runs the server software of the render farm manager, whereas the slave nodes run the client software.

Fig. 2. Render farm architecture (renderer: Blender, mental ray, etc.; render farm manager: DrQueue, Qube, etc.)

A typical rendering workflow (see Fig. 3) can be described in the following steps; a minimal sketch of the frame-splitting idea in step 4 follows below:

1) Artists create the virtual scenes on their workstations.
2) Artists store their virtual scene files, textures, etc. on the shared storage.
3) Artists submit rendering jobs to the queue manager, a software package running on the head node.
4) The queue manager divides the job into independent tasks and distributes them to slave machines. A task could be the rendering of one full image, a few images, or even a subsection (tile) of an image. A job may have to wait in the queue depending on its priority and the load on the render farm.
5) Slave machines read the virtual scene and associated data from the shared storage.
6) Slaves render the virtual scene and save the resulting image(s) back on the file server.
7) The user is notified of job completion or of errors, if any.

Fig. 3. Rendering workflow
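The following minimal Python sketch illustrates one simple way the frame-splitting of step 4 could work. It is only an illustration: the chunk size and the round-robin assignment to slaves are assumptions made for this example, not the behaviour of any particular queue manager.

    # Illustrative sketch: split a render job's frame range into independent tasks
    # and hand them out to slaves in round-robin order.
    def split_into_tasks(first_frame, last_frame, frames_per_task):
        """Yield (start, end) frame ranges; each range becomes one task."""
        frame = first_frame
        while frame <= last_frame:
            end = min(frame + frames_per_task - 1, last_frame)
            yield (frame, end)
            frame = end + 1

    def assign_round_robin(tasks, slaves):
        """Assign each task to a slave in round-robin order."""
        assignment = {slave: [] for slave in slaves}
        for i, task in enumerate(tasks):
            assignment[slaves[i % len(slaves)]].append(task)
        return assignment

    tasks = list(split_into_tasks(1, 720, frames_per_task=8))
    print(assign_round_robin(tasks, ["node01", "node02", "node03"]))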
A render queue manager (also known as a render farm manager or job scheduler) is typically a client-server package that facilitates communication between the slaves and the master. The head node runs the server component whereas all slave nodes run the client component of the render queue manager, although some queue managers have no central manager. Common features of queue managers are queue prioritization, management of software licenses, and algorithms to optimize throughput in the farm. Software licensing handled by the queue manager might involve dynamic allocation of available CPUs or even cores within CPUs.

III. COMPUTE CLUSTER

A compute cluster is a group of computers linked with each other through a fast local area network. Clusters are used mainly for computational purposes rather than for I/O-oriented operations such as databases or web services. For instance, a cluster might support weather forecast simulations or flow dynamics simulations of a plane wing. A typical compute cluster comprises one head node and multiple compute (or slave) nodes. All clusters run a resource management software package that accepts jobs from users, holds them until they can be run, runs them, and delivers the output back to the user.

Fig. 4. Compute cluster architecture (application: Matlab, Ansys, etc.; job scheduler: PBS, etc.)

Fig. 4 shows the architecture of a compute cluster. The head node runs the server software of the resource manager, whereas the compute nodes run the client software. A typical workflow to execute a job on a cluster is described below (a minimal example of a job file and its submission is sketched after this list):

1) The user prepares a job file that contains some parameters and the path to the executable or script to run. For instance, the amount of memory and the number of CPU cores required are specified.
2) The user submits the job file to the job scheduler. Submission is usually done over an SSH terminal, but some job schedulers also offer a web interface for job submission.
3) The scheduler puts the job into the appropriate queue.
4) When the job's turn comes, the scheduler allocates the resources and starts the execution of the executable/script specified in the job description on the allocated slave node.
5) When the job finishes its execution, the output and error logs are saved to disk. The scheduler can terminate a job if its execution time exceeds a predefined amount of time.
6) The user is notified of job completion.
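As an illustration of steps 1 and 2, the Python sketch below writes a minimal PBS-style job file and submits it with qsub. The resource values, file names and executable are assumptions chosen for the example, not values prescribed by the paper.

    # Illustrative sketch: prepare and submit a PBS job file (steps 1 and 2 above).
    import subprocess

    # Example job file contents; resource values and file names are assumptions.
    job_script = "\n".join([
        "#!/bin/bash",
        "#PBS -N example_job",
        "#PBS -l nodes=1:ppn=8",
        "#PBS -l walltime=04:00:00",
        "#PBS -l mem=16gb",
        "cd $PBS_O_WORKDIR",
        "./my_simulation input.dat > output.log",
        "",
    ])

    with open("example_job.pbs", "w") as f:
        f.write(job_script)

    # qsub prints the identifier of the newly queued job on success.
    job_id = subprocess.check_output(["qsub", "example_job.pbs"]).decode().strip()
    print("submitted job", job_id)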
IV. RENDERING ON A COMPUTE CLUSTER

It is clear that render farms and compute clusters have a similar architecture. Both contain one master machine (or head node) and one or more slave machines, both run a resource manager software package, and both have similar workflows. A render farm can be considered a special kind of compute cluster that uses a resource manager and other software (renderers) specific to rendering computer-generated imagery. This section focuses on running the render farm ecosystem over a compute cluster. First, the current approaches to this problem and their limitations are described. Then, a new approach is proposed that not only presents existing and familiar interfaces to users but also requires less implementation effort.

A. Proposed Cluster-based Rendering Framework

As mentioned above, all of the existing work [6][8][11] has focused on developing every component of rendering job management and scheduling from scratch. This paper, however, proposes a new approach that uses an existing open-source or commercial render farm manager, meeting the requirements mentioned later, and uses the compute cluster's resources with minimal or no change to the existing setup.

Recall from the earlier sections that a render farm manager has a client/server architecture. The server software (r-server) runs on the head node whereas the client component (r-client) runs on the slave nodes. The key difference between the existing approaches and the proposed approach is that the existing approaches schedule the render jobs submitted by users directly to the cluster or grid and manage their state and execution themselves, whereas the new approach schedules the client component of the render farm manager rather than the render jobs directly. The server component can run either on the cluster's head node (compute head node) alongside the cluster's existing job manager, or on a new server machine (render head node) added to the existing environment. The render head node also runs a software module called the Rendering on Cluster Meta Scheduler (RCMS) (see Fig. 5).

Fig. 5. Interface between the render farm management software and the cluster resource management software

The RCMS scheduling module queries the render farm manager and dynamically adjusts the number of active r-client jobs. Each r-client job appears to the r-server as a render slave node. The number of active r-client jobs depends on the current load of the compute cluster and the number of render jobs in the queue. If there are pending render jobs, RCMS can submit new r-client jobs to the compute cluster. It also kills active r-client jobs when there is no render job in the queue, releasing the resources to make them available for compute jobs. It maintains a state table to keep track of the r-client jobs submitted to the cluster (see Fig. 6 and the sketch below).

Fig. 6. Render farm integration with the compute cluster (RCMS and its render farm client state table)
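As an illustration only, one entry of the state table shown in Fig. 6 could be represented by a record such as the following; the field names and the sample values are assumptions made for this sketch, not the schema of the paper's prototype.

    # Illustrative sketch of one entry in the RCMS state table (field names and
    # sample values are assumptions, not the prototype's actual schema).
    from dataclasses import dataclass

    @dataclass
    class RClientJob:
        cluster_job_id: str   # job id returned by the cluster scheduler (e.g. by qsub)
        node: str             # compute node the r-client is running on, if known
        submitted_at: float   # submission timestamp (seconds since the epoch)
        status: str           # e.g. "queued", "running", "finished"

    # RCMS can keep all tracked r-client jobs keyed by their cluster job id.
    state_table = {}
    state_table["1234.example"] = RClientJob("1234.example", "node07", 1.6e9, "running")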
The RCMS module should be able to talk to both the render farm manager and the cluster resource manager in order to keep the rendering system running effectively. Interfacing with cluster managers is trivial because most cluster managers support at least a command-line interface for job submission and related operations. On the other hand, not all render farm managers are compatible with this framework. There is a set of requirements that a render farm manager has to meet in order to be compatible with the proposed framework; these requirements are described in the following subsection.

B. Render Farm Manager Compatibility Requirements

Several open-source and commercial render farm managers are available, but in order to be compatible with the proposed distributed rendering on cluster framework, a render farm manager should meet the following requirements:

1) Job control API: The render farm manager should support a set of calls to query information about the render jobs submitted by users. This interface can be either command-line programs or an API in a programming language such as Python, Ruby or C/C++.
2) Failsafe rendering: The render farm manager should be able to detect failed or incomplete rendered images. An r-client job can run only for a fixed amount of time, after which the cluster job manager will terminate it. It is therefore important that the render farm manager detect incomplete rendered images and reschedule them.
3) Automatic client recognition: The server should automatically detect active clients on slave nodes. Clients should send so-called heartbeats to the server so that the server automatically knows of their existence. This is required because clients are expected to become active dynamically over a changing set of compute nodes in the cluster.
4) Supervisor required: Some render farm managers do not need a server component to manage the resources. However, the proposed approach requires that the render farm management software have a supervisor to centrally control the jobs and resources.

Table I lists some of the popular render farm managers along with these features. It is clear that Smedge and Spider are not compatible with the proposed framework because they do not support a supervisor and/or a job control API.

C. Cluster Resource Manager Compatibility Requirements

All major cluster resource managers, such as PBS and LSF, support at least a command-line interface over SSH for user interaction, and some also offer a web interface. The proposed distributed rendering framework requires that the cluster resource manager support the following operations via the command line:

1) Job submission: The cluster manager should support job submission via the command line. RCMS prepares a job file that specifies the desired resources such as the number of cores, the amount of memory and the execution time.
2) Query jobs: The cluster manager should support querying the currently active jobs by their names.
RCMS uses its own naming scheme in order to identify the r-client jobs.
3) Job deletion: Because r-client jobs on the cluster are dynamically deleted in order to release cluster resources for other computation tasks, it is necessary that the cluster resource manager support job deletion.

TABLE I. RENDER QUEUE MANAGEMENT SOFTWARE

Name | Supported 3D Applications | Supervisor Required? | Job Control API
DrQueue | Blender, Maya, mental ray, Pixie, command-line tools | Yes | Yes
Qube! | Maya, mental ray, SoftImage, RenderMan, Shake, command-line tools | Yes | Yes
Smedge | 3ds max, After Effects, Maya, mental ray, SoftImage | No | Yes
Spider | Maya | No | No
RenderPal | 3ds max, Blender, Cinema 4D, Houdini, Maya, mental ray, SoftImage | Yes | Yes
ButterflyNetRender (BNR) | All major applications and command-line tools | Yes | Yes
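To tie the requirements above together, here is a deliberately simplified Python sketch of the RCMS scheduling logic. It is not the paper's actual implementation: get_pending_render_jobs() is a placeholder for whatever job control API the render farm manager exposes, rclient.pbs is a hypothetical job file, and the standard PBS/Torque commands qsub, qselect and qdel are used for submission, name-based querying and deletion.

    # Simplified sketch of the RCMS scheduling loop (not the paper's actual code).
    import subprocess
    import time

    RCLIENT_JOB_NAME = "rcms_rclient"   # naming scheme used to find our own jobs
    MAX_RCLIENT_JOBS = 9                # e.g. limited by render farm licenses

    def get_pending_render_jobs():
        """Placeholder for the render farm manager's job control API.

        A real implementation would query the manager (e.g. via its command-line
        tools); the stub returns 0 so this sketch stays self-contained."""
        return 0

    def active_rclient_jobs():
        """Return the ids of r-client jobs currently known to the cluster scheduler."""
        out = subprocess.run(["qselect", "-N", RCLIENT_JOB_NAME],
                             capture_output=True, text=True).stdout
        return out.split()

    def submit_rclient_job():
        subprocess.run(["qsub", "-N", RCLIENT_JOB_NAME, "rclient.pbs"], check=True)

    while True:
        pending = get_pending_render_jobs()
        active = active_rclient_jobs()
        if pending and len(active) < MAX_RCLIENT_JOBS:
            submit_rclient_job()              # grow the pool of render slaves
        elif not pending:
            for job_id in active:             # shrink the pool, free cluster nodes
                subprocess.run(["qdel", job_id])
        time.sleep(60)                        # poll interval (illustrative)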
V. EXPERIMENTAL RESULTS

As a proof of concept, a prototype of the proposed distributed rendering framework has been implemented and benchmarked; this section presents the benchmark results. The prototype uses PipelineFX Qube [12] as the render farm manager and PBS [13] for cluster resource management, and is implemented in Python. The compute cluster (named Suqoor) at Texas A&M University at Qatar [14] has been used as the test environment. Out of ten available Qube licenses, one is consumed by the Qube Supervisor that manages all render jobs, and the remaining nine licenses are used by Qube workers; each worker requires one license.

A virtual scene (see Fig. 7) that comes with Autodesk Maya [3], a 3D modelling application, is used for the benchmarking. The scene has a 30-second animation comprising 720 frames at 24 frames per second. For performance comparison, the same animation has also been rendered (software rendering) on Suqoor and on three other workstations. Software rendering refers to a rendering process that is unassisted by specialized graphics hardware (such as graphics processing units, or GPUs). Hardware rendering, which utilizes the graphics hardware, cannot be performed on Suqoor due to the lack of graphics hardware; however, the performance of hardware rendering on the workstations has also been compared to software rendering on Suqoor.

Fig. 7. A 3D virtual scene modelled in Maya

Table II summarizes the hardware specification of the workstations and of a single compute node of the compute cluster (Suqoor). Note that two of the workstations have the same hardware specification (Dell 690) but different operating systems: one runs Windows XP x64 and the other CentOS 6.

TABLE II. HARDWARE AND SOFTWARE SPECIFICATION OF TEST PLATFORMS

Machine | Dell 690 | Suqoor (single node) | HP Z800
CPU | 2x Intel Clovertown | Intel Harpertown | Intel Westmere-EP
Clock Speed | | | 2.66 GHz
Cores (per CPU) | | |
Threads (per CPU) | | |
L2 Cache (per CPU) | 8 MB | 12 MB | 12 MB
CPU Launch Date | Q4'06 | Q4'07 | Q1'10
Memory | 16 GB | 32 GB | 16 GB
Operating System(s) | Win XP 64-bit / CentOS 6 (64-bit) | SuSE Linux Enterprise Server 10 (64-bit) | Red Hat Enterprise Linux 5 (64-bit)
GPU | Quadro FX 4600 | None | Quadro Plex 6000

Fig. 8. Average rendering time per frame (software rendering)

Fig. 8 shows the average rendering time per frame on a single node (8 cores) of Suqoor and on the other workstations. Note that the rendering time on a single compute node of the cluster with 8 cores (25.22 s) and on the HP Z800 workstation with 12 cores (25.86 s) differs by just a fraction of a second. It has also been observed that rendering on Windows XP is almost 2.76 times slower than on CentOS Linux with the same hardware configuration.

Fig. 9. Average rendering time per frame versus number of cluster nodes (software rendering)

Fig. 9 shows the average rendering time per frame when using 1, 4, 7 and 9 compute nodes, respectively. Due to the limited number of available Qube
licenses (10), the experiment could not be performed with more than 9 compute nodes.

Fig. 10. Number of frames rendered and accumulated time spent
Fig. 11. Average rendering times per frame with respect to compute nodes
Fig. 12. Rendering time of individual frames (software rendering)
Fig. 13. Speed-up with cluster (software rendering)

Fig. 10 shows the number of rendered frames and the aggregated rendering time spent by each compute node, illustrating how the Qube supervisor has distributed the render jobs across the compute nodes. With an average distribution of 80 frames per node, it is apparent that the work distribution is almost equal.

Fig. 11 shows the average rendering time per frame on each compute node. The variation in the result can be attributed to the variation in the 3D model's complexity from different view angles.

Fig. 12 shows the rendering time of all frames; remember that there are 720 frames in the animation. It is observed that the workstations render the frames one after the other, utilizing multiple CPU cores for single-frame rendering. On the other hand, Suqoor renders multiple frames concurrently on individual compute nodes; the number of active frames depends on the number of available cores on the compute nodes, with each core assigned to an individual frame. Because of this, the rendering time of individual frames is higher on Suqoor than on the other workstations.

Fig. 13 shows the performance speed-up with respect to the other platforms. For instance, a cluster with nine compute nodes performs nearly 51 times better than the Dell 690 workstation with Windows, nearly 18 times better than the Dell 690 workstation with Linux, and nearly 9 times better than the HP Z800 workstation.

Fig. 14. Software rendering on the cluster versus hardware rendering on Windows and Linux workstations

Fig. 14 compares the performance of software rendering on Suqoor to hardware
rendering on the other workstations. This plot shows how much faster hardware rendering is on the other workstations with respect to Suqoor; remember that Suqoor does not support hardware rendering. It is observed that hardware rendering, especially on the Linux workstations, is remarkably faster than software rendering. The performance gap is drastically reduced by using more nodes on the cluster.

VI. CONCLUSION

This paper has presented a framework that creates a distributed rendering environment on a general-purpose compute cluster by using an existing render farm management application. It can be used to create a rendering environment similar to that of an animation studio in a university environment where users do not have exclusive access to computers to perform time-consuming image rendering. A prototype of the proposed framework, using Qube! for render farm management and PBS for compute cluster management, has been implemented. The experimental results show that the compute cluster reduces the rendering time significantly in the case of software rendering. Moreover, by using an existing render farm manager, the overall rendering workflow becomes more efficient.

VII. FUTURE WORK

One area where the compute cluster lags behind is hardware rendering, due to the absence of GPUs in the compute nodes. Texas A&M University at Qatar is soon expected to acquire a larger cluster that will also have GPUs in several compute nodes. As future work, the same experiment will be repeated on the new cluster and the performance of hardware rendering will be analysed. The new cluster is expected to outperform the workstations by a large margin.

References
[1] Blender.
[2] Autodesk 3D Studio Max.
[3] Autodesk Maya.
[4] Mental ray.
[5] V-Ray.
[6] Gooding, S. Lee, Laura Arns, Preston Smith, and Jenett Tillotson, "Implementation of a distributed rendering environment for the TeraGrid," in Challenges of Large Applications in Distributed Environments, IEEE, 2006.
[7] Grid Computing.
[8] Jing, Huajun, and Bin Gong, "The design and implementation of Render Farm Manager based on OpenPBS," in Computer-Aided Industrial Design and Conceptual Design (CAID/CD), International Conference on, IEEE.
[9] TeraGrid.
[10] RenderMan.
[11] Chong, Anthony, Alexei Sourin, and Konstantin Levinski, "Grid-based computer animation rendering," in Proceedings of the 4th International Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia, ACM.
[12] PipelineFX Qube!.
[13] PBS Guide.
[14] High Performance Computing at Texas A&M University at Qatar.
