VISUALIZATION CORNER
Editors: Cláudio T. Silva, csilva@cs.utah.edu; Joel E. Tohline, tohline@lsu.edu

AN EXPERIMENTAL DISTRIBUTED VISUALIZATION SYSTEM FOR PETASCALE COMPUTING

By Jinghua Ge, Andrei Hutanu, Cornelius Toole, Robert Kooima, Imtiaz Hossain, and Gabrielle Allen

The coming era of petascale computing and heterogeneous platforms calls for fundamental changes in perspective on how we design distributed visualization software.

Interactive, photorealistic visualization is an important tool in scientific research because it lets scientists make visual discoveries in real time. Modern computational science increasingly produces large-scale datasets using high-performance computing (HPC) resources that are remote from the user, making interactive visualization and analysis of such data a challenging task. One option is to replicate the data on local resources for local interactive analysis. However, local data replication is cumbersome, and in many cases the processing power of desktop visualization systems is inadequate. Existing parallel visualization systems such as ParaView [1] and VisIt [2] were designed to support large-dataset visualization, but they typically impose long wait times during data loading and preprocessing, and require lengthy noninteractive processes for high-quality visualization. When the data is located remotely and is tens of gigabytes or larger, there's a pressing need for satisfactory interactivity and performance.

In contrast to existing systems, our approach uses powerful remote resources connected by high-speed networks to directly visualize remote data without replication. We also combine distributed resources in a single visualization application. Our distributed visualization system, eaviv, adopts cutting-edge technologies such as parallel rendering on graphics processing unit (GPU) clusters, progressive visualization, high-speed network I/O, and scalable video streaming.
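The cost of the replication approach is easy to quantify. As a back-of-the-envelope sketch (illustrative link speeds, assuming fully utilized links; the 64-Gbyte figure matches the per-timestep dataset size we report testing with later), the time to move one timestep is simply its size divided by sustained throughput:

```python
def transfer_time_s(bytes_total: float, throughput_bps: float) -> float:
    """Seconds needed to move a dataset at a sustained throughput (bits/s)."""
    return bytes_total * 8 / throughput_bps

timestep = 64 * 2**30  # one 64-Gbyte simulation timestep

# Replicating to a desktop over a well-utilized 1 Gbps campus link:
t_campus = transfer_time_s(timestep, 1e9)

# Streaming between HPC sites over a dedicated 10 Gbps circuit:
t_circuit = transfer_time_s(timestep, 10e9)

print(round(t_campus), round(t_circuit))  # 550 55 (seconds)
```

Minutes per timestep over a typical local link, versus well under a minute on a dedicated high-speed circuit, is the gap that motivates treating the network as a first-class resource.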
Our driving goal in designing the eaviv system is to keep pace with the increasing scale of simulations and data growth. The system supports distributed collaboration, letting multiple users in physically distributed locations interact with the same visualization to communicate their ideas and explore datasets cooperatively. We've demonstrated and tested eaviv across the Louisiana Optical Network Initiative (LONI) and TeraGrid. Here, we describe eaviv's design principles as well as its hardware and software components.

Design Principles

Developed by a small group of researchers at the Louisiana State University (LSU) Center for Computation & Technology (CCT), the eaviv distributed visualization system is a prototype petascale data analysis tool that addresses the root problems of next-generation visualization systems. We designed eaviv to support experimentation, and it's constantly undergoing optimization. Our work is guided by several key design principles. In brief, eaviv must be:

- lightweight, so system optimization research can be continuously applied to implementation refinement;
- user-oriented, developed in collaboration with specific users within computational science communities;
- modular, so it's easy to configure for different use scenarios;
- progressive in adopting emerging infrastructure and technology in HPC, networking, GPU computing, and mobile devices;
- scalable, such that overall scalability results from attentiveness to each individual component's scalability; and
- asynchronous, emphasizing interactivity by providing a progressive visualization pipeline in which software components couple asynchronously without blocking the pipelined data flow.

These design principles represent the key factors that distinguish the eaviv system architecture from other visualization software, and they serve as guidelines throughout our software development.

Copublished by the IEEE CS and the AIP / COMPUTING IN SCIENCE & ENGINEERING
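The asynchronous principle is the least conventional of these and benefits from a concrete illustration. The sketch below is a toy model in Python (eaviv's actual components are distributed processes on separate machines, not threads): two pipeline stages are coupled through a shared pool, and the downstream renderer keeps producing frames from whatever data has arrived rather than blocking on its upstream neighbor.

```python
import queue
import threading
import time

def data_server(pool: queue.Queue, num_chunks: int) -> None:
    """Upstream stage: streams data chunks into the pool at its own rate."""
    for i in range(num_chunks):
        time.sleep(0.01)          # stand-in for network transfer latency
        pool.put(f"chunk-{i}")

def renderer(pool: queue.Queue, frames: int) -> list[int]:
    """Downstream stage: renders every frame using only the data so far."""
    received: list[str] = []
    progress = []
    for _ in range(frames):
        while True:               # drain the pool without ever blocking
            try:
                received.append(pool.get_nowait())
            except queue.Empty:
                break
        progress.append(len(received))  # "render" the partial data
        time.sleep(0.005)         # stand-in for one frame of rendering work
    return progress

pool = queue.Queue()
server = threading.Thread(target=data_server, args=(pool, 20))
server.start()
progress = renderer(pool, frames=50)
server.join()
print(progress[0], progress[-1])  # early frames see few chunks; later frames see more
```

The per-frame chunk counts grow monotonically while the renderer never stalls, which is exactly the behavior the pool-and-show mechanism described later provides in the real system.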
The eaviv System

We're actively developing eaviv as an open source research project, and the US National Science Foundation's Early-concept Grants for Exploratory Research (EAGER) program supports its experimental network testbed. We demonstrated eaviv at the 2008 International Conference for HPC, Networking, Storage and Analysis (SC08) and won first prize at the IEEE International Scalable Computing Challenge (SCALE2009). We presented eaviv at SC09 and at the Spring 2010 Internet2 Member Meeting. For these demonstrations, we used data from scientific user communities including chemical tomography [3], numerical relativity (Cactus; cactuscode.org), and astrophysics (Enzo). Figure 1 shows the visualization pipeline's data flow, from the supercomputer that runs the simulation and stores the data, to the data servers that transfer the data to the renderer cluster, which streams images to the displays.

Hardware Components

The system's hardware components, including HPC clusters, high-resolution displays, and interaction devices, are distributed, specialized resources connected by high-speed networks.

Network testbed. Our work addresses fundamental issues in distributed visualization design and implementation, where network services represent a first-class resource. To support this, we built a testbed connecting resources at CCT, LSU, the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign (UIUC), and the Laboratory of Advanced Networking Technologies (SITOLA) at Masaryk University in the Czech Republic. The Internet2 Interoperable On-demand Network (ION) provides wide-area connectivity to support dynamic network circuit services. Applications request and reserve point-to-point circuits between sites as needed using automated control software.

Computation and display. The primary resources connected by our network testbed include contributions from CCT, LONI, NCSA, TeraGrid, and SITOLA.

GPU and rendering.
Spider, LSU's distributed computing testbed, consists of eight rendering nodes and four I/O nodes. Rendering nodes are connected pair-wise to four Nvidia Tesla S1070 graphics units. NCSA's accelerator cluster (iacat.uiuc.edu/resources/cluster) is a 40-node cluster that combines both GPU and field-programmable gate array technology. NCSA's Lincoln TeraGrid cluster (illinois.edu/userinfo/resources/Hardware/Intel64TeslaCluster) has 192 nodes with 96 Nvidia Tesla S1070 graphics units.

Figure 1. The eaviv visualization pipeline connects data servers, GPU clusters, and displays. The data servers and rendering GPU clusters are physically distributed resources remote to the local user. They're connected to the local displays by high-speed networks. Large-scale datasets are thus processed by remote resources and the resulting image streamed to local users.

Storage. SITOLA provides a Sun X4540 server with eight AMD Opteron 2356 cores, 64 Gbytes of memory, and 48 Tbytes of storage. Running Linux with disks configured in RAID-0, it allows up to 1.7 Gbytes per second of sustained reading and 1 Gbps of sustained writing in real time. NCSA's mass storage system (MSS; Data/MSS) is a production system for permanent data storage and consists of a high-performance system running NCSA's UberFS. MSS is a parallel, automatically backed-up storage system accessible via SSH, FTP, and GridFTP.

Display. LSU has a Sony SXRD 4K projector (4,096 × 2,160 pixels), and SITOLA has two tiled displays: one 6 × 4 with a total resolution of 55 Megapixels and six display nodes, and the other 2 × 2 with 9 Megapixels.

Software Components

Our system's main software components are the distributed data server and parallel renderer. The eaviv system also supports specialized tangible remote
interaction and uses the Scalable Adaptive Graphics Environment (SAGE) [5] high-performance parallel image streaming software.

Figure 2. Four rendering nodes are used to process the data in parallel. Each node renders one quarter of the data into a local image and contributes partially to the final composited image following an overlap-based compositing schedule. Using this method, Pcaster can produce high-resolution images scalably. Data courtesy of Britton Smith, University of Colorado.

Data server. Data servers can be either local or remote. We use the local data server for simple scenarios, when data is readily accessible and local disk speed is sufficient or when transport performance isn't critical. We use remote data servers in two scenarios: when the data isn't available locally, or when local I/O performance is insufficient. In the second scenario, network throughput can exceed that of the local disk system, and we can increase data transfer speed by distributing the data on the network and loading it into the remote machines' main memory.

High-performance data transmission over wide-area networks is difficult to achieve. One of the main factors affecting data transport performance is the network transport protocol; an unsuitable protocol on a wide-area network can perform poorly. In some situations, the standard Internet TCP protocol might sustain only a few Mbps of throughput on a 10 Gbps dedicated network connection. We've designed our system to support high-speed transport protocols, including UDT (UDP-based Data Transfer) [4], allowing us to achieve high network throughput on long-distance, high-capacity network links. In addition, eaviv supports parallelism: each renderer node can access data from multiple data servers. This increases data throughput as well as the amount of data the system can access.

Parallel volume renderer.
Our experimental datasets are large-scale, time-varying 3D volumes. This is a common data model in the computational science community, and because 3D data size grows more quickly than 2D data as a problem's scale increases, the format provides an interesting challenge for our system. We implemented the ray-casting parallel volume renderer Pcaster as the rendering component to demonstrate the distributed pipeline's workflow. Compared to the parallel volume renderers in existing software such as Avizo, EnSight (www.ensight.com), VisIt, or ParaView, Pcaster is a purely GPU-based volume renderer and image compositor supporting high-resolution rendering. Pcaster asynchronously couples with parallel data servers for network-streamed data input. We've tested Pcaster with datasets up to 64 Gbytes per timestep and achieved interactive frame rates (5 to 10 frames per second) using 32 Nvidia Tesla S1070 GPUs on NCSA's Lincoln cluster to render images at 1,024 × 1,024 resolution.

Figure 2 shows an image with contributions from four renderers. Each node renders one quarter of the volume dataset and contributes a local composited image. Using this method, Pcaster's image resolution scales with the parallel rendering. For example, when we split a 4,096³-voxel volume into 1,024³-voxel subvolumes and render it using 64 rendering nodes, each renderer produces a 1,024 × 1,024 image, and the final composition has a resolution of 4,096 × 4,096.

For multi-Gbyte datasets, data loading can take a long time. Rather than blocking on I/O, with the renderer waiting for data to load, Pcaster uses a pool-and-show mechanism: rendering proceeds at interactive rates while data is continuously streamed into the renderer's data pool. This produces a progressive visualization output for the user.
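The scaling arithmetic behind the 4,096³ example can be made explicit. This small sketch (a hypothetical helper; Pcaster's actual brick-assignment code isn't shown here) splits a cubic volume into equal cubic bricks, one per rendering node, and derives the composited output resolution:

```python
def decompose(volume_side: int, brick_side: int, tile_px: int):
    """Split a cubic volume into equal cubic bricks, one per rendering node,
    and derive the resolution of the final composited image."""
    bricks_per_axis = volume_side // brick_side   # bricks along each axis
    num_nodes = bricks_per_axis ** 3              # one brick per node
    # Bricks that project onto the same screen tile are depth-composited;
    # the tiles form a bricks_per_axis x bricks_per_axis mosaic on screen.
    final_res = bricks_per_axis * tile_px
    return num_nodes, (final_res, final_res)

nodes, resolution = decompose(volume_side=4096, brick_side=1024, tile_px=1024)
print(nodes, resolution)  # 64 (4096, 4096)
```

Doubling the volume side at a fixed brick size multiplies the node count by eight but only doubles the output resolution per axis, which is why the per-node image stays a constant 1,024 × 1,024.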
Figure 3 shows an example of an intermediate rendering that demonstrates the progressive data pooling: 32 nodes are rendering the partial data in the pool. Pcaster's progressive rendering mechanism not only gives users more prompt interaction with the data, but also provides visual performance metrics, including data preparation time and network transfer speed. Users can easily see any system bottlenecks and adapt their strategy as needed.

Image streaming and remote tangible interaction. To support ultra-high-resolution distributed visualization, eaviv integrates the SAGE image streaming software for high-performance parallel image delivery. Developed at the University of Illinois at Chicago's
Electronic Visualization Laboratory, SAGE can stream remote application imagery of potentially hundreds of megapixels of contiguous display resolution to a user's local display. Also, SAGE bridges can be used to multicast the video stream from the application to multiple users, thus supporting collaborative scientific visualization environments. SAGE also supports the standard Internet UDP protocol; although UDP doesn't provide data transmission reliability, it's more suitable than TCP for high-performance video streaming.

To encourage users to embrace the collaboration environment, eaviv provides a tangible interaction mechanism for more natural engagement with the visualization. In the tangible interaction approach, physical objects control and represent computational media [6]. Figure 4 shows the tagged physical tokens (in this case, RFID-tagged cards) that we used in eaviv experiments. These tangible user interfaces (TUIs) are bound directly to visualization parameters within Pcaster, thus steering the interaction with the remote rendering. These devices provide immediate tactile feedback, which is useful for fine-grained control of remote objects under network latency. Using SAGE and tangible interaction devices enables collaborative visualization, letting multiple users at different locations simultaneously interact and collaborate via the visualization system.

Use Cases

When run with a local data server and a single renderer, eaviv can be used as a desktop tool for small-scale local data. When run with remote data servers, dedicated networks, and parallel rendering, it can handle large-scale distributed data. The eaviv system's configurable architecture thus permits a variety of usage scenarios; we now describe two current examples.

SEPTEMBER/OCTOBER 2010

Classroom Visualizations

At the LSU Visualization Services Center, we're in the process of applying eaviv to a computer lab and classroom environment.
One target application will be a human anatomy class that will visualize data from the Visible Human Project (nih.gov/research/visible/visible_human.html). The Visible Human Project was established in 1989 to build a digital image library of volumetric data representing complete normal adult male and female anatomy (completed in 1994 and 1995, respectively, and rescanned in 2000 at a higher resolution). The project has generated two datasets of 65 and 40 Gbytes, respectively. For certain body sections, computed axial tomography (CT) scans and magnetic resonance imaging (MRI) data are also available. Although dataset sections have been in use on individual lab computers, interactive full-scale visualization of the complete datasets requires parallel rendering.

The study of anatomy is highly visual. Learning where individual regions lie with respect to others, and where they fit in the overall structure, helps greatly in the learning process. In the classroom, both instructor and students will have control over separate instances of the visualization application. The instructor will use a large projection wall for teaching, and students will each have a desktop computer where they can zoom in to the dataset for in-depth exploration. LSU's Spider will run eaviv and send the visualization image to each desktop computer in the lab classroom.

This classroom use case is guiding our future development of eaviv's user interaction modes. In addition to the collaboration mode, in which instructors and students interact with the same visualization, we'll develop an independent steering mode that will let each student interact with the data individually.

Figure 3. Progressive parallel rendering during the receipt of data; 32 nodes are concurrently rendering the data partially available in the data pool. The continuous update of intermediate rendering images clearly demonstrates the rate at which data are being transferred through the network and offers a peek inside the dataset. Data courtesy of Erik Schnetter, Louisiana State University (LSU) Center for Computation & Technology.

Figure 4. Parameter interaction tray. These tangible user interfaces (TUIs) are bound directly to visualization parameters within Pcaster, thus steering the interaction with the remote rendering. Instead of a single user operating with a keyboard and mouse, collaborating users can use multiple tangible devices, providing better interaction and cooperation.

Distributed Execution on Production Resources

To demonstrate eaviv's full distributed capability, we consider a complex black-hole data visualization scenario
in which people in their individual offices at LSU use the following resources to visualize the remote data: a SITOLA data server, which holds datasets of hundreds of Gbytes or Tbytes, and NCSA's Lincoln, the most powerful rendering cluster available to our project.

The common practice in production HPC resource usage is to submit a job into a scheduling queue and wait for it to be executed. The eaviv system requires separate resource allocations for at least three components: the networks, the data servers, and the GPU rendering cluster. The three allocations must coincide in time for the application to execute. The only solution that enables this execution is advance resource reservation to allocate all resources simultaneously. In our testbed, the Lincoln GPU cluster supports advance reservation; the network circuit can be reserved through the LONI ION website (ldcn1.sys.loni.org:8443/oscars). Experimental resources at SITOLA and CCT aren't managed by a scheduler.

As this scenario shows, running a system that enables scientific investigation of large datasets requires collaboration among multiple resource providers. The eaviv system's high-performance pipeline architecture supports cutting-edge emerging infrastructure and serves as a data analysis prototype for the next generation of scientific investigation. It's our hope that our work will benefit visualization tool developers and the research community, both in using our software components and in following our software design principles to improve existing tools.

Acknowledgments

The eaviv project is funded by the US National Science Foundation awards Louisiana RII CyberTools #, EAGER network testbed #, and Viz Tangibles #. The initial eaviv research was funded by the Center for Computation & Technology at LSU.

References

1. A. Cedilnik et al., "Remote Large Data Visualization in the ParaView Framework," Proc.
Eurographics Parallel Graphics and Visualization, ACM Press, 2006.
2. H. Childs et al., "A Contract-Based System for Large Data Visualization," Proc. 16th IEEE Visualization, IEEE CS Press, 2005.
3. H. Kyungmin, H.A. Harriett, and L.C. Butler, "Burning Issues in Tomography Analysis," Computing in Science & Eng., vol. 10, no. 2, 2008.
4. G. Yunhong and R.L. Grossman, "UDT: UDP-Based Data Transfer for High-Speed Wide Area Networks," Computer Networks, vol. 51, no. 7, 2007.
5. L. Renambot et al., "Enabling High Resolution Collaborative Visualization in Display Rich Virtual Organizations," Future Generation Computer Systems, vol. 25, no. 2, 2009.
6. H. Ishii and B. Ullmer, "Tangible Bits: Towards Seamless Interfaces Between People, Bits and Atoms," Proc. SIGCHI Conf. Human Factors in Computing Systems, ACM Press, 1997.

Jinghua Ge is a visualization consultant at the Center for Computation & Technology, Louisiana State University. Her research interests include computer graphics, scientific visualization, parallel rendering with GPU clusters, and distributed visualization pipelines. Ge has a PhD in computer science from the University of Illinois at Chicago. Contact her at jinghuage@cct.lsu.edu.

Andrei Hutanu is an IT consultant at the Center for Computation & Technology, Louisiana State University. His research interests include software system design and engineering, distributed computing, and high-speed network applications, with an emphasis on distributed visualization, data management, and collaborative applications. Hutanu has a PhD in computer science from Louisiana State University. Contact him at ahutanu@cct.lsu.edu.

Cornelius Toole is a PhD candidate in the Department of Computer Science at Louisiana State University. His research interests include human-computer interaction, scientific visualization, and distributed computing. Toole has an MS in computer science from Jackson State University. Contact him at corntoole@cct.lsu.edu.
Robert Kooima is a postdoctoral researcher at the Center for Computation & Technology and an adjunct professor in the Department of Computer Science at Louisiana State University. His research interests include real-time 3D computer graphics and parallel rendering for scientific visualization. Kooima has a PhD in computer science from the University of Illinois at Chicago. Contact him at rlk@lsu.edu.

Imtiaz Hossain is the manager of the Visualization Services Center of Information Technology Services at Louisiana State University. His research interests include image processing, pattern recognition, and data visualization. Hossain has an MS in mathematics from Louisiana State University. He is a member of the ACM, IEEE, and SPIE. Contact him at imtiaz@lsu.edu.

Gabrielle Allen is an associate professor of computer science at Louisiana State University. Her research interests include computational frameworks, high-performance computing, high-speed networks, computational astrophysics, and coastal modeling. Allen has a PhD in computational astrophysics from Cardiff University. Contact her at gallen@cct.lsu.edu.
More informationChapter 7. Using Hadoop Cluster and MapReduce
Chapter 7 Using Hadoop Cluster and MapReduce Modeling and Prototyping of RMS for QoS Oriented Grid Page 152 7. Using Hadoop Cluster and MapReduce for Big Data Problems The size of the databases used in
More informationA REVIEW PAPER ON THE HADOOP DISTRIBUTED FILE SYSTEM
A REVIEW PAPER ON THE HADOOP DISTRIBUTED FILE SYSTEM Sneha D.Borkar 1, Prof.Chaitali S.Surtakar 2 Student of B.E., Information Technology, J.D.I.E.T, sborkar95@gmail.com Assistant Professor, Information
More informationSystem Models for Distributed and Cloud Computing
System Models for Distributed and Cloud Computing Dr. Sanjay P. Ahuja, Ph.D. 2010-14 FIS Distinguished Professor of Computer Science School of Computing, UNF Classification of Distributed Computing Systems
More informationNVIDIA IndeX. Whitepaper. Document version 1.0 3 June 2013
NVIDIA IndeX Whitepaper Document version 1.0 3 June 2013 NVIDIA Advanced Rendering Center Fasanenstraße 81 10623 Berlin phone +49.30.315.99.70 fax +49.30.315.99.733 arc-office@nvidia.com Copyright Information
More informationHPC Wales Skills Academy Course Catalogue 2015
HPC Wales Skills Academy Course Catalogue 2015 Overview The HPC Wales Skills Academy provides a variety of courses and workshops aimed at building skills in High Performance Computing (HPC). Our courses
More informationPetaShare: Enabling Data Intensive Science
PetaShare: Enabling Data Intensive Science Tevfik Kosar Center for Computation & Technology Louisiana State University June 25, 2007 The Data Deluge Scientific data outpaced Moore s Law! 2 The Lambda Blast
More informationDeploying 10/40G InfiniBand Applications over the WAN
Deploying 10/40G InfiniBand Applications over the WAN Eric Dube (eric@baymicrosystems.com) Senior Product Manager of Systems November 2011 Overview About Bay Founded in 2000 to provide high performance
More informationPerformance Management for Next- Generation Networks
Performance Management for Next- Generation Networks Definition Performance management for next-generation networks consists of two components. The first is a set of functions that evaluates and reports
More informationBehind every great artist is an extraordinary pipeline
AUTODESK Integrated Creative Environment Behind every great artist is an extraordinary pipeline 2006 Universal Pictures; image courtesy Rhythm & Hues While working on The Fast and the Furious - Tokyo Drift,
More informationThe UC Berkeley-LBL HIPPI Networking Environment
The UC Berkeley-LBL HIPPI Networking Environment Bruce A. Mah bmah@tenet.berkeley.edu The Tenet Group Computer Science Division University of California at Berkeley and International Computer Science Institute
More informationKeystone Image Management System
Image management solutions for satellite and airborne sensors Overview The Keystone Image Management System offers solutions that archive, catalogue, process and deliver digital images from a vast number
More informationEnterprise HPC & Cloud Computing for Engineering Simulation. Barbara Hutchings Director, Strategic Partnerships ANSYS, Inc.
Enterprise HPC & Cloud Computing for Engineering Simulation Barbara Hutchings Director, Strategic Partnerships ANSYS, Inc. Historical Perspective Evolution of Computing for Simulation Pendulum swing: Centralized
More informationData Semantics Aware Cloud for High Performance Analytics
Data Semantics Aware Cloud for High Performance Analytics Microsoft Future Cloud Workshop 2011 June 2nd 2011, Prof. Jun Wang, Computer Architecture and Storage System Laboratory (CASS) Acknowledgement
More informationHigh Performance Data-Transfers in Grid Environment using GridFTP over InfiniBand
High Performance Data-Transfers in Grid Environment using GridFTP over InfiniBand Hari Subramoni *, Ping Lai *, Raj Kettimuthu **, Dhabaleswar. K. (DK) Panda * * Computer Science and Engineering Department
More informationEqualizer. Parallel OpenGL Application Framework. Stefan Eilemann, Eyescale Software GmbH
Equalizer Parallel OpenGL Application Framework Stefan Eilemann, Eyescale Software GmbH Outline Overview High-Performance Visualization Equalizer Competitive Environment Equalizer Features Scalability
More informationFibre Channel Overview of the Technology. Early History and Fibre Channel Standards Development
Fibre Channel Overview from the Internet Page 1 of 11 Fibre Channel Overview of the Technology Early History and Fibre Channel Standards Development Interoperability and Storage Storage Devices and Systems
More informationOptimizing GPU-based application performance for the HP for the HP ProLiant SL390s G7 server
Optimizing GPU-based application performance for the HP for the HP ProLiant SL390s G7 server Technology brief Introduction... 2 GPU-based computing... 2 ProLiant SL390s GPU-enabled architecture... 2 Optimizing
More informationCLOUDDMSS: CLOUD-BASED DISTRIBUTED MULTIMEDIA STREAMING SERVICE SYSTEM FOR HETEROGENEOUS DEVICES
CLOUDDMSS: CLOUD-BASED DISTRIBUTED MULTIMEDIA STREAMING SERVICE SYSTEM FOR HETEROGENEOUS DEVICES 1 MYOUNGJIN KIM, 2 CUI YUN, 3 SEUNGHO HAN, 4 HANKU LEE 1,2,3,4 Department of Internet & Multimedia Engineering,
More informationPacket-based Network Traffic Monitoring and Analysis with GPUs
Packet-based Network Traffic Monitoring and Analysis with GPUs Wenji Wu, Phil DeMar wenji@fnal.gov, demar@fnal.gov GPU Technology Conference 2014 March 24-27, 2014 SAN JOSE, CALIFORNIA Background Main
More informationMichał Jankowski Maciej Brzeźniak PSNC
National Data Storage - architecture and mechanisms Michał Jankowski Maciej Brzeźniak PSNC Introduction Assumptions Architecture Main components Deployment Use case Agenda Data storage: The problem needs
More informationLA-UR- Title: Author(s): Intended for: Approved for public release; distribution is unlimited.
LA-UR- Approved for public release; distribution is unlimited. Title: Author(s): Intended for: Los Alamos National Laboratory, an affirmative action/equal opportunity employer, is operated by the Los Alamos
More informationBig data management with IBM General Parallel File System
Big data management with IBM General Parallel File System Optimize storage management and boost your return on investment Highlights Handles the explosive growth of structured and unstructured data Offers
More informationData Mining for Data Cloud and Compute Cloud
Data Mining for Data Cloud and Compute Cloud Prof. Uzma Ali 1, Prof. Punam Khandar 2 Assistant Professor, Dept. Of Computer Application, SRCOEM, Nagpur, India 1 Assistant Professor, Dept. Of Computer Application,
More informationEnabling Real-Time Sharing and Synchronization over the WAN
Solace message routers have been optimized to very efficiently distribute large amounts of data over wide area networks, enabling truly game-changing performance by eliminating many of the constraints
More informationLatency in High Performance Trading Systems Feb 2010
Latency in High Performance Trading Systems Feb 2010 Stephen Gibbs Automated Trading Group Overview Review the architecture of a typical automated trading system Review the major sources of latency, many
More informationThe Lattice Project: A Multi-Model Grid Computing System. Center for Bioinformatics and Computational Biology University of Maryland
The Lattice Project: A Multi-Model Grid Computing System Center for Bioinformatics and Computational Biology University of Maryland Parallel Computing PARALLEL COMPUTING a form of computation in which
More informationWINDOWS AZURE AND WINDOWS HPC SERVER
David Chappell March 2012 WINDOWS AZURE AND WINDOWS HPC SERVER HIGH-PERFORMANCE COMPUTING IN THE CLOUD Sponsored by Microsoft Corporation Copyright 2012 Chappell & Associates Contents High-Performance
More informationSTORNEXT PRO SOLUTIONS. StorNext Pro Solutions
STORNEXT PRO SOLUTIONS StorNext Pro Solutions StorNext PRO SOLUTIONS StorNext Pro Solutions offer post-production and broadcast professionals the fastest, easiest, and most complete high-performance shared
More informationScala Storage Scale-Out Clustered Storage White Paper
White Paper Scala Storage Scale-Out Clustered Storage White Paper Chapter 1 Introduction... 3 Capacity - Explosive Growth of Unstructured Data... 3 Performance - Cluster Computing... 3 Chapter 2 Current
More informationRevoScaleR Speed and Scalability
EXECUTIVE WHITE PAPER RevoScaleR Speed and Scalability By Lee Edlefsen Ph.D., Chief Scientist, Revolution Analytics Abstract RevoScaleR, the Big Data predictive analytics library included with Revolution
More informationManaging Large Imagery Databases via the Web
'Photogrammetric Week 01' D. Fritsch & R. Spiller, Eds. Wichmann Verlag, Heidelberg 2001. Meyer 309 Managing Large Imagery Databases via the Web UWE MEYER, Dortmund ABSTRACT The terramapserver system is
More informationVMWARE WHITE PAPER 1
1 VMWARE WHITE PAPER Introduction This paper outlines the considerations that affect network throughput. The paper examines the applications deployed on top of a virtual infrastructure and discusses the
More informationVisualization @ SUN. Linda Fellingham, Ph. D Manager, Visualization and Graphics Sun Microsystems
Visualization @ SUN Shared Visualization 1.1 Software Scalable Visualization 1.1 Solutions Linda Fellingham, Ph. D Manager, Visualization and Graphics Sun Microsystems The Data Tsunami Visualization is
More informationBoas Betzler. Planet. Globally Distributed IaaS Platform Examples AWS and SoftLayer. November 9, 2015. 20014 IBM Corporation
Boas Betzler Cloud IBM Distinguished Computing Engineer for a Smarter Planet Globally Distributed IaaS Platform Examples AWS and SoftLayer November 9, 2015 20014 IBM Corporation Building Data Centers The
More informationHow To Share Rendering Load In A Computer Graphics System
Bottlenecks in Distributed Real-Time Visualization of Huge Data on Heterogeneous Systems Gökçe Yıldırım Kalkan Simsoft Bilg. Tekn. Ltd. Şti. Ankara, Turkey Email: gokce@simsoft.com.tr Veysi İşler Dept.
More informationPARALLEL & CLUSTER COMPUTING CS 6260 PROFESSOR: ELISE DE DONCKER BY: LINA HUSSEIN
1 PARALLEL & CLUSTER COMPUTING CS 6260 PROFESSOR: ELISE DE DONCKER BY: LINA HUSSEIN Introduction What is cluster computing? Classification of Cluster Computing Technologies: Beowulf cluster Construction
More informationMOAB CON 2009 RMSC Case Study Providing Supercomputing Platforms as a Service (SPaaS)
Providing Supercomputing Platforms as a Service (SPaaS) Phillip J. Curtiss, Ph.D. Rocky Mountain Supercomputing Centers, Inc. InfoMine of the Rockies, Inc. RMSC Core Mission Stimulate and Foster Economic
More informationTableau Server Scalability Explained
Tableau Server Scalability Explained Author: Neelesh Kamkolkar Tableau Software July 2013 p2 Executive Summary In March 2013, we ran scalability tests to understand the scalability of Tableau 8.0. We wanted
More informationPRODUCTIVITY ESTIMATION OF UNIX OPERATING SYSTEM
Computer Modelling & New Technologies, 2002, Volume 6, No.1, 62-68 Transport and Telecommunication Institute, Lomonosov Str.1, Riga, LV-1019, Latvia STATISTICS AND RELIABILITY PRODUCTIVITY ESTIMATION OF
More informationImplementing a Digital Video Archive Using XenData Software and a Spectra Logic Archive
Using XenData Software and a Spectra Logic Archive With the Video Edition of XenData Archive Series software on a Windows server and a Spectra Logic T-Series digital archive, broadcast organizations have
More informationIT of SPIM Data Storage and Compression. EMBO Course - August 27th! Jeff Oegema, Peter Steinbach, Oscar Gonzalez
IT of SPIM Data Storage and Compression EMBO Course - August 27th Jeff Oegema, Peter Steinbach, Oscar Gonzalez 1 Talk Outline Introduction and the IT Team SPIM Data Flow Capture, Compression, and the Data
More informationCTX OVERVIEW. Ucentrik CTX
CTX FACT SHEET CTX OVERVIEW CTX SDK API enables Independent Developers, VAR s & Systems Integrators and Enterprise Developer Teams to freely and openly integrate real-time audio, video and collaboration
More informationUnified Computing Systems
Unified Computing Systems Cisco Unified Computing Systems simplify your data center architecture; reduce the number of devices to purchase, deploy, and maintain; and improve speed and agility. Cisco Unified
More informationDistributed File System. MCSN N. Tonellotto Complements of Distributed Enabling Platforms
Distributed File System 1 How do we get data to the workers? NAS Compute Nodes SAN 2 Distributed File System Don t move data to workers move workers to the data! Store data on the local disks of nodes
More informationWanVelocity. WAN Optimization & Acceleration
WanVelocity D A T A S H E E T WAN Optimization & Acceleration WanVelocity significantly accelerates applications while reducing bandwidth costs using a combination of application acceleration, network
More informationLustre Networking BY PETER J. BRAAM
Lustre Networking BY PETER J. BRAAM A WHITE PAPER FROM CLUSTER FILE SYSTEMS, INC. APRIL 2007 Audience Architects of HPC clusters Abstract This paper provides architects of HPC clusters with information
More informationThe Data Grid: Towards an Architecture for Distributed Management and Analysis of Large Scientific Datasets
The Data Grid: Towards an Architecture for Distributed Management and Analysis of Large Scientific Datasets!! Large data collections appear in many scientific domains like climate studies.!! Users and
More information#jenkinsconf. Jenkins as a Scientific Data and Image Processing Platform. Jenkins User Conference Boston #jenkinsconf
Jenkins as a Scientific Data and Image Processing Platform Ioannis K. Moutsatsos, Ph.D., M.SE. Novartis Institutes for Biomedical Research www.novartis.com June 18, 2014 #jenkinsconf Life Sciences are
More informationPACE Predictive Analytics Center of Excellence @ San Diego Supercomputer Center, UCSD. Natasha Balac, Ph.D.
PACE Predictive Analytics Center of Excellence @ San Diego Supercomputer Center, UCSD Natasha Balac, Ph.D. Brief History of SDSC 1985-1997: NSF national supercomputer center; managed by General Atomics
More informationSoftware. Enabling Technologies for the 3D Clouds. Paolo Maggi (paolo.maggi@nice-software.com) R&D Manager
Software Enabling Technologies for the 3D Clouds Paolo Maggi (paolo.maggi@nice-software.com) R&D Manager What is a 3D Cloud? "Cloud computing is a model for enabling convenient, on-demand network access
More informationPRODUCTS & TECHNOLOGY
PRODUCTS & TECHNOLOGY DATA CENTER CLASS WAN OPTIMIZATION Today s major IT initiatives all have one thing in common: they require a well performing Wide Area Network (WAN). However, many enterprise WANs
More informationRecognization of Satellite Images of Large Scale Data Based On Map- Reduce Framework
Recognization of Satellite Images of Large Scale Data Based On Map- Reduce Framework Vidya Dhondiba Jadhav, Harshada Jayant Nazirkar, Sneha Manik Idekar Dept. of Information Technology, JSPM s BSIOTR (W),
More informationLSKA 2010 Survey Report Job Scheduler
LSKA 2010 Survey Report Job Scheduler Graduate Institute of Communication Engineering {r98942067, r98942112}@ntu.edu.tw March 31, 2010 1. Motivation Recently, the computing becomes much more complex. However,
More informationA New Data Visualization and Analysis Tool
Title: A New Data Visualization and Analysis Tool Author: Kern Date: 22 February 2013 NRAO Doc. #: Version: 1.0 A New Data Visualization and Analysis Tool PREPARED BY ORGANIZATION DATE Jeff Kern NRAO 22
More informationShared Display Wall Based Collaboration Environment in the Control Room of the DIII-D National Fusion Facility
Shared Display Wall Based Collaboration Environment in the G. Abla a, G. Wallace b, D.P. Schissel a, S.M. Flanagan a, Q. Peng a, and J.R. Burruss a a General Atomics, P.O. Box 85608, San Diego, California
More informationVisualization and Data Analysis
Working Group Outbrief Visualization and Data Analysis James Ahrens, David Rogers, Becky Springmeyer Eric Brugger, Cyrus Harrison, Laura Monroe, Dino Pavlakos Scott Klasky, Kwan-Liu Ma, Hank Childs LLNL-PRES-481881
More information