Object Database Scalability for Scientific Workloads


Technical Report

Julian J. Bunn, Koen Holtman, Harvey B. Newman

HEP, Caltech, 1200 E. California Blvd., Pasadena, CA 91125, USA
CERN EP-Division, CH-1211 Geneva 23, Switzerland

We describe the PetaByte-scale computing challenges posed by the next generation of particle physics experiments, due to start operation in 2005. The computing models adopted by the experiments call for systems capable of handling sustained data acquisition rates of at least 100 MBytes/second into an Object Database, which will have to handle several PetaBytes of accumulated data per year. The systems will be used to schedule CPU intensive reconstruction and analysis tasks on the highly complex physics Object data, which must then be served to clients located at universities and laboratories worldwide. We report on measurements with a prototype system that makes use of a 256 CPU HP Exemplar X Class machine running the Objectivity/DB database. Our results show excellent scalability for up to 240 simultaneous database clients, and aggregate I/O rates exceeding 150 MBytes/second, indicating the viability of the computing models.

1 Introduction

The Large Hadron Collider (LHC) is currently under construction at the European Laboratory for Particle Physics (CERN) in Geneva, Switzerland. Due to start operation in 2005, the LHC will collide particles (protons) at energies up to 14 TeV, the highest energy collisions yet achieved. Analysis of the collisions will hopefully uncover the Higgs particle, which is believed to be responsible for giving all other particles their mass. Finding the Higgs, or proving that it does not exist, is currently the Holy Grail of particle physics.

Collisions in the LHC are expected to occur at a rate of about 800 million per second. Of these millions of events, only about 100 are expected to reveal the Higgs particle. The collisions take place inside massive detectors, whose task is to identify and select these candidate events for recording, a process called triggering. The triggering rate in the two main LHC detectors is expected to be approximately 100 Hz. Each candidate event comprises approximately 1 MByte of combined data from the very many sub-elements of the detector. The raw event data thus emerge from the detector's electronic data acquisition (DAQ) system at a rate of around 100 MBytes per second. The raw event data at the LHC will amount to several PetaBytes (10^15 bytes) per year, each year for the estimated twenty year lifetime of the experiments. The data are already highly compressed when they emerge from the DAQ system, and they must be stored in their entirety.
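These headline rates follow from a short back-of-the-envelope calculation. The sketch below assumes roughly 10^7 seconds of effective data taking per year; that figure is an illustrative assumption, not a number taken from the text, and the result accounts for raw data only.

```python
# Back-of-the-envelope check of the DAQ rate and yearly raw data volume.
# Assumption (for illustration only): ~1e7 seconds of effective data taking per year.
trigger_rate_hz = 100            # candidate events per second after triggering
event_size_mbytes = 1.0          # raw data per candidate event
live_seconds_per_year = 1e7      # assumed effective running time per year

daq_rate = trigger_rate_hz * event_size_mbytes                # MBytes/second
raw_volume_pbytes = daq_rate * live_seconds_per_year / 1e9    # 1 PByte = 1e9 MBytes

print(f"sustained DAQ rate: {daq_rate:.0f} MBytes/second")
print(f"raw data per year:  {raw_volume_pbytes:.1f} PBytes (before reconstructed data is added)")
```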

From these data, reconstructions of physics objects, such as tracks, clusters and jets, will take place in near real time on dedicated processor farms. The reconstructed objects will add about 200 kBytes of extra information to each event. By the time the LHC programme reaches maturity, projections indicate that the total event data volume will be in excess of 100 PetaBytes. Managing this quantity of data, and making it available to the large multinational community of physicists participating in the CERN Physics Programme, is an unprecedented computing challenge.

It is a tenet of the community that physicists working at institutes remote from CERN should enjoy the same level of access to the data as their colleagues located at CERN. This imposes the condition on the LHC Computing Models that either the data be continuously transported across the network or that analysis tasks (or queries) be moved as close to the data as possible. In practice, rapid decisions on whether to move the data in the network, or to move the task, have to be made.

To tackle the scale and complexity of the data, the currently favoured technologies for the LHC Computing Models include Object Oriented software to support the data model, distributed Object Database Management Systems (ODBMS) to manage the persistency of the physics objects, and Hierarchical Storage Management systems to cope with the quantity of data and to support access to hot and cold event data.

The GIOD (Globally Interconnected Object Databases) Project [2], a joint effort between Caltech, CERN and Hewlett Packard Corporation, has been investigating the use of WAN-distributed Object Databases and Mass Storage systems for LHC data. We have been using several key hardware and software technologies for our tests, including a 256 CPU Caltech HP Exemplar of 0.1 TIPS, the High Performance Storage System (HPSS) from IBM, the Objectivity/DB ODBMS, and various high speed Local Area and Wide Area networks. One particular focus of our work has been on measuring the capability of the Object Database to (i) support hundreds of simultaneous clients, (ii) allow reading and writing at aggregate data rates of 100 MBytes/second, and (iii) scale to the complexity and size of the LHC data.

In this paper, we report on scalability tests of the Objectivity/DB object database [3] made on the 256-processor HP Exemplar located at Caltech's Center for Advanced Computing Research. Our tests focused on the behaviour of the aggregate throughput as a function of the number of database clients, under various representative workloads, and using realistic (simulated) LHC event data.

2 Testing platform and Object database

The scalability tests were performed on the HP Exemplar machine at Caltech, a 256 CPU SMP machine of some 0.1 TIPS. The machine consists of 16 nodes, which are connected by a special-purpose fast network called a CTI (see figure 1).

Each node contains 16 PA8000 processors and one node file system. A node file system consists of 4 disks with 4-way striping, with a file system block size of 64 KB and a maximum raw I/O rate of 22 MBytes/second. We used up to 240 processors and up to 15 node file systems in our tests. We ensured that data was always read from disk, and never from the file system cache. An analysis of the raw I/O behaviour of the Exemplar can be found in [4].

Figure 1: Configuration of the HP Exemplar at Caltech (16 nodes connected by the CTI)

The Exemplar runs a single operating system image, and all node file systems are visible as local UNIX file systems to any process running on any node. If the process and file system are on different nodes, data is transported over the CTI. The CTI was never a bottleneck in the test loads we put on the machine: it was designed to support shared memory programming and can easily achieve data rates in the GBytes/second range. As such, the Exemplar can be thought of as a farm of sixteen 16-processor UNIX machines with cross-mounted file systems and a semi-infinite capacity network. Though the Exemplar is not a good model for current UNIX or PC farms, where network capacity is a major constraining factor, it is perhaps a good model for future farms which use GBytes/second networks like Myrinet [5] as an interconnect.

The object database tested was the HP-UX version of Objectivity/DB [3]. The Objectivity/DB architecture comprises a federation of databases. All databases in the federation share a common object schema, and are indexed in a master catalog. Each database is a file. Each database contains one or more containers. Each container is structured as a set of pages (all of a single, fixed size) onto which the persistent objects are mapped. The database can be accessed by clients, which are applications linked against the Objectivity/DB libraries and the database schema files. Client access to local databases is achieved via the local file system. Access to remote databases is made via an Advanced Multithreaded Server (AMS), which returns database pages across the network to the client. Database locks are managed by a lockserver process. Locks operate at the database container level. We report on two sets of tests, completed with different database configurations and different data.

3 Tests with synthetic data

Our first round of tests used synthetic event data represented as sets of 10 KByte objects. A 1 MByte event thus became a set of 100 objects of 10 KB. Though not realistic in terms of physics, this approach has the advantage of giving cleaner results by eliminating some potential sources of complexity.
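The storage hierarchy just described (a federation of database files, each holding containers, each container holding fixed-size pages onto which persistent objects are mapped) and the synthetic event layout can be summarised in a small conceptual model. The sketch below is an illustration of the concepts only, not the Objectivity/DB API; the 32 KB page size anticipates the value chosen for the tests below.

```python
# Conceptual sketch (not the Objectivity/DB API) of the storage hierarchy used in the
# tests: a federation of database files, each holding containers, each container
# holding fixed-size pages onto which persistent objects are mapped.
from dataclasses import dataclass, field

PAGE_SIZE = 32 * 1024       # database page size chosen for the tests (32 KB)
OBJECT_SIZE = 10 * 1024     # synthetic object size (10 KB)

@dataclass
class Container:
    name: str
    object_sizes: list = field(default_factory=list)

    def pages_used(self) -> int:
        # crude estimate: objects packed back to back onto fixed-size pages
        return -(-sum(self.object_sizes) // PAGE_SIZE)

@dataclass
class Database:             # one database = one file in the federation
    filename: str
    containers: dict = field(default_factory=dict)

@dataclass
class Federation:           # all databases share a schema and a master catalog
    databases: dict = field(default_factory=dict)

fed = Federation()
db = fed.databases.setdefault("synthetic_raw_001", Database("synthetic_raw_001"))
db.containers["raw"] = Container("raw", [OBJECT_SIZE] * 100)   # one 1 MB synthetic event
print("pages needed for one synthetic event:", db.containers["raw"].pages_used())
```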

For these tests we used an earlier release of Objectivity/DB than the v5.0 used in section 4. We placed all database elements (database clients, database lockserver, federation catalog file) on the Exemplar itself. Database clients communicated with the lockserver via TCP/IP sockets, but all traffic was local inside the supercomputer. The federation catalog and the payload data were accessed by the clients through the Exemplar UNIX file system interface. The test loads were generated with the TOPS framework [6], which runs on top of Objectivity.

Two things in the Objectivity architecture were of particular concern. First, Objectivity does not support a database page size of 64 KB; it only supports sizes up to 64 KB minus a few bytes. Thus, it does not match well to the node file systems, which have a block size of exactly 64 KB. After some experiments we found that a database page size of 32 KB was the best compromise, so we used that throughout our tests. Second, the Objectivity architecture uses a single lockserver process to handle all locking operations. This lockserver could become a bottleneck when the number of (lock requests from) clients increases.

3.1 Reconstruction test

In particle physics, an event occurs when two particles collide inside a physics detector. Event reconstruction is the process of computing physical interpretations (reconstructed data) of the raw event data measured by the detector. We have tested the database under an event reconstruction workload with up to 240 clients. In this workload, each client runs a simulated reconstruction job on its own set of events. For one event, the actions are as follows (a code sketch of this per-event pattern is given below):

Reading: 1 MB of raw data is read, as 100 objects of 10 KB. The objects are read from 3 containers: 50 from the first, 25 from the second, and 25 from the third. Inside the containers, the objects are clustered sequentially in the reading order.

Writing: 100 KB of reconstructed data is written, as 10 objects of 10 KB, to one container.

Computation: 2 * 10^3 MIPSs are spent per event (equivalent to 5 CPU seconds on one Exemplar CPU).

Reading, writing, and computing are interleaved with one another. The data sizes are derived from the CMS computing technical proposal [1]. The proposal predicts a considerably larger computation time per event. However, it also predicts that CPUs will be 100 times more powerful (in MIPS per $) at LHC startup in 2005, whereas we expect that disks will only be a factor of 4 more powerful (in MBytes/second per $) by then. In our test we chose a computation time of 2 * 10^3 MIPSs per event as a compromise.

The clustering strategy for the raw data is based on [7]. The detector is divided into three separate parts and data from each part are clustered separately in different containers. This allows faster access for analysis tasks which only need some parts of the detector. The database files are divided over four Exemplar node file systems, with the federation catalog and the journal files on a fifth file system.
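A minimal sketch of this per-event access pattern is given below, with in-memory buffers standing in for database containers and a busy-wait standing in for reconstruction code; it illustrates the shape of the workload only and is not the TOPS or Objectivity code used in the tests.

```python
# Illustrative sketch of the per-event reconstruction workload (not the TOPS code).
# Figures from the text: 100 x 10 KB raw objects read 50/25/25 from three containers,
# 10 x 10 KB reconstructed objects written to one container, ~5 CPU seconds of compute.
import io
import time

OBJ_SIZE = 10 * 1024         # 10 KB objects
RAW_SPLIT = (50, 25, 25)     # raw objects read per container, clustered in reading order
RECO_OBJECTS = 10            # 100 KB of reconstructed data per event

def reconstruct_event(raw_containers, reco_container, compute_seconds=5.0):
    """Read raw objects, run (simulated) reconstruction, write reconstructed objects."""
    for container, n_obj in zip(raw_containers, RAW_SPLIT):
        for _ in range(n_obj):
            container.read(OBJ_SIZE)                # sequential raw-data reads
    deadline = time.perf_counter() + compute_seconds
    while time.perf_counter() < deadline:           # busy-wait stands in for physics code
        pass
    for _ in range(RECO_OBJECTS):
        reco_container.write(b"\x00" * OBJ_SIZE)    # reconstructed output objects

# Tiny demo with in-memory "containers" and a scaled-down compute time.
raw = [io.BytesIO(b"\x00" * (n * OBJ_SIZE)) for n in RAW_SPLIT]
reco = io.BytesIO()
reconstruct_event(raw, reco, compute_seconds=0.05)
print("read 1 MB of raw data, wrote", reco.tell() // 1024, "KB of reconstructed data")
```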

In reading the raw data, we used the read-ahead optimisation described in section 3.2.

The results from our tests are shown in figure 2.

Figure 2: Scalability of reconstruction workloads (aggregate throughput in MB/s versus the number of clients, for the 2 * 10^3 MIPSs/event and 10^3 MIPSs/event workloads)

The solid curve shows the aggregate throughput for the CMS reconstruction workload described above. The aggregate throughput (and thus the number of events reconstructed per second) scales almost linearly with the number of clients. In the left part of the curve, 91% of the allocated CPU resources are spent running actual reconstruction code. With 240 clients, 83% of the allocated CPU power (240 CPUs) is used for physics code, yielding an aggregate throughput of 47 MBytes/second (42 events/s), using about 0.1 TIPS.

The dashed curve in figure 2 shows a workload with the same I/O profile as described above, but half as much computation. This curve shows a clear shift from a CPU-bound to a disk-bound workload at 160 clients. The maximum throughput is 55 MBytes/second, which is 63% of the maximum raw throughput of the four allocated node file systems (88 MBytes/second).

Overall, the disk efficiency is lower than the CPU efficiency. The mismatch between database and file system page sizes discussed in section 3 is one obvious contributing factor. In tests with fewer clients on a platform with a 16 KByte file system page size, we have seen higher disk efficiencies for similar workloads.
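The shape of both curves can be captured by a simple saturation model: the aggregate throughput is the smaller of the CPU-limited rate, which grows linearly with the number of clients, and an effective disk limit. The sketch below is an idealised illustration built from the figures quoted above (1.1 MB of I/O per event, 5 or 2.5 CPU seconds per event, and an effective disk limit of about 55 MBytes/second); it assumes 100% CPU efficiency and is not a model taken from the original analysis.

```python
# Idealised saturation model (illustration only) for the reconstruction scaling curves:
# throughput is CPU-limited until the allocated node file systems saturate.
def aggregate_throughput(n_clients, cpu_seconds_per_event,
                         io_mb_per_event=1.1,      # 1 MB read + 0.1 MB written per event
                         disk_limit_mb_s=55.0):    # effective limit of the four file systems
    cpu_bound = n_clients * io_mb_per_event / cpu_seconds_per_event
    return min(cpu_bound, disk_limit_mb_s)

for n in (60, 120, 160, 240):
    full = aggregate_throughput(n, cpu_seconds_per_event=5.0)   # 2 * 10^3 MIPSs/event
    half = aggregate_throughput(n, cpu_seconds_per_event=2.5)   # 10^3 MIPSs/event
    print(f"{n:3d} clients: full workload ~{full:4.1f} MB/s, half computation ~{half:4.1f} MB/s")
```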

3.2 The read-ahead optimisation

When reading raw data from the containers in the above reconstruction tests, we used a read-ahead optimisation layer built into our testbed. The layer takes the form of a specialised iterator, which causes the database to read containers in bursts of 4 MByte (128 pages) at a time. Without this layer, the (simulated) physics application would produce single page reads interspersed with computation. Tests have shown that such less bursty reading leads to a loss of I/O performance.

In [7] we discussed I/O performance tests for a single client iterating through many containers, with and without the read-ahead optimisation. Here, we consider the case of N clients all iterating through N containers, with each client accessing one container only. The computation in each client is again 2 * 10^3 MIPSs per MByte read. Containers are placed in databases on two node file systems, which have a combined raw throughput of 44 MBytes/second.

Figure 3: Performance of many clients each performing sequential reading on a container (aggregate throughput in MB/s versus the number of clients, with and without read-ahead)

Figure 3 shows that without the read-ahead optimisation, the workload becomes disk-bound fairly quickly, at 64 clients. Apparently, a lot of time is lost in disk seeks between the different containers. In this test, the lack of a read-ahead optimisation degrades the maximum I/O performance by a factor of two. Because of the results in [7], we expect that the performance would have been degraded even more in the reconstruction test of section 3.1, where each client reads from three containers.
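The read-ahead layer can be thought of as an iterator that fetches a container in large sequential bursts and hands out objects from the buffered burst, instead of issuing one small read per object. The sketch below illustrates this idea over an ordinary file-like object; it is not the specialised Objectivity iterator from the testbed.

```python
# Illustrative read-ahead iterator (not the testbed's specialised Objectivity iterator):
# fetch the container in large sequential bursts and hand out objects from memory,
# instead of issuing one small read per object access.
import io

PAGE_SIZE = 32 * 1024
BURST_PAGES = 128                       # 4 MByte bursts, as used in the tests
BURST_SIZE = BURST_PAGES * PAGE_SIZE

def read_ahead_objects(fileobj, object_size):
    """Yield fixed-size objects from fileobj, reading the data in 4 MB bursts."""
    leftover = b""
    while True:
        burst = fileobj.read(BURST_SIZE)            # one large sequential read
        if not burst:
            break
        data = leftover + burst
        n_whole = len(data) // object_size
        for i in range(n_whole):
            yield data[i * object_size:(i + 1) * object_size]
        leftover = data[n_whole * object_size:]

# Demo on an in-memory "container" holding 1000 objects of 10 KB.
container = io.BytesIO(b"\x00" * (1000 * 10 * 1024))
objects = list(read_ahead_objects(container, 10 * 1024))
print(len(objects), "objects delivered using", -(-container.tell() // BURST_SIZE), "bursts")
```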

3.3 DAQ test

In this test, each client writes a stream of 10 KByte objects to its own container. For every event (1 MByte of raw data) written, about 180 MIPSs (0.45 CPU seconds on the Exemplar) are spent in simulated data formatting. For comparison, 0.20 CPU seconds are spent by Objectivity in object creation and writing, and the operating system spends 0.01 CPU seconds per event. No read operations on flat files or network reads are done by the clients. The database files are divided over eight node file systems, with the federation catalog and the journal files on a ninth file system.

The test results are shown in figure 4.

Figure 4: Scalability of a DAQ workload (aggregate throughput in MB/s versus the number of clients)

Again we see a transition from a CPU-bound to a disk-bound workload. The highest throughput is 145 MBytes/second at 144 clients, which is 82% of the maximum raw throughput of the eight allocated node file systems (176 MBytes/second).

In workloads above 100 clients, when the node file systems become saturated with write requests, these file systems show some surprising behaviour. It can take a very long time, several minutes, to perform basic operations like syncing a file (which is done by the database when committing a transaction) or creating a new (database) file. We believe this is due to the appearance of long file system write request queues in the operating system. During the test, other file systems not saturated with write requests still behave as usual. We conclude from this that one should be careful about saturating file systems with write requests: unexpectedly long slowdowns may occur.
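The location of this transition can be estimated from the per-event CPU costs quoted above: each event costs about 0.45 + 0.20 + 0.01 = 0.66 CPU seconds, so a single client can generate roughly 1.5 MBytes/second of writes, and the eight node file systems saturate once roughly 120 clients are writing. The calculation below is only a back-of-the-envelope estimate based on these numbers.

```python
# Back-of-the-envelope estimate of where the DAQ workload turns disk-bound.
cpu_s_per_event = 0.45 + 0.20 + 0.01     # data formatting + Objectivity + operating system
event_size_mb = 1.0
per_client_mb_s = event_size_mb / cpu_s_per_event
raw_disk_limit_mb_s = 8 * 22.0           # eight node file systems at 22 MBytes/second each

print(f"per-client write rate: {per_client_mb_s:.2f} MBytes/second")
print(f"clients needed to saturate the disks: ~{raw_disk_limit_mb_s / per_client_mb_s:.0f}")
```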

3.4 Client startup

Figure 5: Client startup in the reconstruction test (seconds since client start until the transaction is initialised and until the first object is read, versus client sequence number)

We measured the scalability of client startup times throughout our tests. We found that the client startup time depends on the number of clients already running and on the number of clients being started at the same time. It depends much less on the database workload, at least if the federation catalog and journal files are placed on a file system that is not heavily loaded. With heavily loaded catalog and journal file systems, startup times of many minutes have been observed.

Figure 5 shows a startup time profile typical for our test workloads. Here, new clients are started in batches of 16. For client number 240, the time needed to open the database and initialise the first database transaction is about 20 seconds. The client then opens four containers (located in three different database files), reads some indexing data structures, and initialises its reconstruction loop. Some 60 seconds after startup, the first raw data object is read. If a single new client number 241 is started by itself, opening the database and initialising the transaction takes some 5 seconds.

4 Tests with real physics data

Our second round of tests wrote realistic physics event data into the database. These data were generated from a pool of around one million fully simulated LHC multi-jet QCD events (figure 6).

The simulated events were used to populate the Objectivity database according to an object schema that fully implemented the complex relationships between the components of the events. The average size of the events used in the tests was 260 KB.

Figure 6: A typical event with its tracks, detector space points and energy clusters

In these tests we used Objectivity/DB v5.0. Only the database clients and the payload database files were located on the Exemplar system. The lockserver was run on an HP workstation connected to the Exemplar via a LAN. The database clients contacted the lockserver over TCP/IP connections. The federation catalog was placed on a C200 HP workstation, connected to the Exemplar over a dedicated ATM link (155 Mbits/second). The clients accessed the catalog over TCP/IP connections to the Objectivity/DB AMS server, which ran on the C200 workstation.

The event data in these tests were written using the software developed in the GIOD project [2]. Each database client first read 12 events into memory, then wrote them out repeatedly into its own dedicated database file. Once the database file reached a size of about 600 MBytes, it was closed and deleted by the client. Then the client created and filled a new database file. This was arranged to avoid exhausting file system space during the tests. In a real DAQ system, periodic switches to new database files would also occur, whilst retaining the old database files.
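Each client's write loop in these tests (cache a dozen events, write them repeatedly into a dedicated database file, and roll over to a fresh file at about 600 MBytes) can be sketched as follows. The sketch writes plain files and is not the GIOD/Objectivity code; the demo run at the end is deliberately small.

```python
# Illustrative sketch (plain files, not the GIOD/Objectivity code) of one client's write
# loop: 12 cached events are written repeatedly into a dedicated database file; at about
# 600 MB the file is closed and deleted, and a fresh file is created.
import os
import tempfile

FILE_LIMIT = 600 * 1024 * 1024        # roll over at about 600 MBytes
EVENT_SIZE = 260 * 1024               # average event size in these tests (260 KB)

def daq_client(client_id, n_events_to_write, workdir):
    cached_events = [b"\x00" * EVENT_SIZE for _ in range(12)]    # events held in memory
    file_index, written_in_file = 0, 0
    out = open(os.path.join(workdir, f"client{client_id}_{file_index}.db"), "wb")
    for i in range(n_events_to_write):
        out.write(cached_events[i % 12])
        written_in_file += EVENT_SIZE
        if written_in_file >= FILE_LIMIT:             # switch to a new database file
            out.close()
            os.remove(out.name)                       # deleted to avoid filling the disk
            file_index, written_in_file = file_index + 1, 0
            out = open(os.path.join(workdir, f"client{client_id}_{file_index}.db"), "wb")
    out.close()
    os.remove(out.name)

with tempfile.TemporaryDirectory() as workdir:
    daq_client(0, n_events_to_write=50, workdir=workdir)          # deliberately small demo
print("demo client finished")
```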

Database files were evenly distributed over 15 node file systems on the Exemplar. Of these node file systems, ten contain 4 disks and are rated at 22 MBytes/second raw; the remaining five contain fewer disks and achieve a lower throughput. The 15 node file systems used contain 49 disks in total.

We used two different data models for the event data to be written. In one set of tests, we wrote data in the GIOD data model, which is the data model developed in the GIOD project [2]. In this data model, a raw event consists of 6 objects, each placed in a different Objectivity container in the same database file, with object associations (links) between these objects. The objects in the GIOD data model are shown in figure 7.

Figure 7: Objects and their relations in the GIOD data model (a top level event object connected by object associations to a top level raw event object and 4 objects containing detector hitmaps)

We ran another set of tests to quantify the overheads associated with the GIOD event data model. These tests used a simpler 1 container data model, in which all 6 objects in the GIOD raw event were written to a single container, without object associations being created.

4.1 Test results

We ran tests with 15, 30, and 45 database clients writing events concurrently to the federated database, with the two different data models discussed above. Figure 8 shows the test results.

Figure 8: DAQ tests with real physics data (aggregate throughput in MBytes/second versus the number of database clients writing concurrently, for the 1 container and GIOD data models)

The 1 container data model shows a best aggregate throughput rate of 172 MBytes/second, reached with 45 running clients. With the GIOD data model, a rate of 154 MBytes/second was achieved when running with 30 clients. We note that the overhead associated with the GIOD model event structure is not significant.
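The difference between the two data models can be illustrated with a small schema sketch. The class and attribute names below are invented for the illustration; they are not the GIOD classes, and plain Python references stand in for Objectivity object associations.

```python
# Illustrative contrast between the two data models used in the write tests.
# Class and attribute names here are invented for the sketch; they are not the
# GIOD classes or the Objectivity/DB association API.
from dataclasses import dataclass, field

@dataclass
class PersistentObject:
    container: str                 # which container the object is written to
    payload: bytes = b""

@dataclass
class GIODRawEvent:
    # GIOD data model: 6 objects per raw event, each in a different container of
    # the same database file, connected by object associations (links).
    top_level_event: PersistentObject = field(default_factory=lambda: PersistentObject("event"))
    top_level_raw: PersistentObject = field(default_factory=lambda: PersistentObject("raw"))
    hitmaps: list = field(default_factory=lambda: [PersistentObject(f"hitmap{i}") for i in range(4)])

@dataclass
class FlatRawEvent:
    # "1 container" data model: the same 6 objects written into a single container,
    # with no object associations created.
    objects: list = field(default_factory=lambda: [PersistentObject("all") for _ in range(6)])

giod = GIODRawEvent()
flat = FlatRawEvent()
print("GIOD model containers:",
      sorted({giod.top_level_event.container, giod.top_level_raw.container}
             | {h.container for h in giod.hitmaps}))
print("1 container model containers:", sorted({o.container for o in flat.objects}))
```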

4.2 Analysis of the scaling limit in figure 8

In the earlier tests with synthetic data (section 3), the scaling curves flatten out when more clients are added because of limits to the available disk bandwidth on the Exemplar. In the real physics data tests of figure 8, the curves flatten out before the available disk bandwidth is saturated. In this case we found that an access bottleneck to the federation catalog file was the limiting factor.

In the tests of figure 8, the catalog file is located remotely on a C200 workstation connected to the Exemplar with an ATM link. A client needs to access the catalog file whenever it creates a new database, and whenever it deletes a database after closing it on reaching the 600 MB limit discussed above. Throughout our tests, we found that no more than about 18 pairs of database delete and create actions could be performed every minute. This was irrespective of the number of clients running: in related tests we ran with up to 75 clients, and observed that only about 30 to 45 clients were actively writing to the database at the same time. All remaining clients were busy waiting for their turn to access the remote catalog file.

The bottleneck in access to the remote catalog file was caused by a saturation of the single CPU of the C200 workstation holding the catalog. The AMS server process on the C200, which provided remote access to the catalog file, used only some 10-20% of the available CPU time. The remainder of the CPU time was spent in kernel mode, though we could not determine on what. The dedicated ATM link between the C200 workstation and the Exemplar was not saturated during our tests: peak observed throughputs were 1.2 MBytes/second, well below its 155 Mbits/second capacity. Most (80%) of the ATM traffic was towards the Exemplar system, consistent with the database clients reading many index pages from the catalog file and updating only a few.

An obvious way to improve on the scaling limit is to create larger database files, or to put the catalog file locally on the Exemplar system, as was done in the tests with synthetic data (section 3). Another option is to create a large number of empty database files in advance.
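The quoted limit of about 18 delete-and-create pairs per minute, combined with the 600 MByte file size, directly caps the aggregate write rate at roughly 180 MBytes/second, which is close to the 154-172 MBytes/second plateau of figure 8. The arithmetic below is a back-of-the-envelope estimate from these numbers, not a calculation taken from the original analysis.

```python
# Back-of-the-envelope estimate of the throughput ceiling implied by the catalog
# bottleneck: every delete-and-create pair corresponds to ~600 MB written by some client.
pairs_per_minute = 18
mb_per_database_file = 600
ceiling_mb_s = pairs_per_minute * mb_per_database_file / 60.0
print(f"catalog-limited aggregate write rate: ~{ceiling_mb_s:.0f} MBytes/second")
```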

5 The lockserver

The lockserver, whether run remotely or locally on the Exemplar, was not a bottleneck in any of our tests. From a study of lockserver behaviour under artificial database workloads with a high rate of locking, we estimate that lockserver communication may become a bottleneck in a DAQ scenario above 1000 MBytes/second.

6 Conclusions

In the first series of tests, with all components of the Objectivity/DB system located on the Exemplar, we observed almost ideal scalability, up to 240 clients, under synthetic physics reconstruction and DAQ workloads. The utilisation of allocated CPU resources on the Exemplar is excellent, with reasonable to good utilisation of allocated disk resources. It should be noted that the Exemplar has a very fast internal network.

In the second series of tests, the database clients were located on the Exemplar, and the Objectivity lockserver, AMS and catalog were located remotely. In this configuration, the system achieved aggregate write rates into the database of more than 170 MBytes/second. This exceeds the 100 MBytes/second required by the DAQ systems of the two main LHC experiments.

Our measurements confirm the viability of using commercial Object Database Management Systems for large scale particle physics data storage and analysis.

References

[1] CMS Computing Technical Proposal. CERN/LHCC 96-45, CMS collaboration, 19 December 1996.

[2] The GIOD project, Globally Interconnected Object Databases.

[3] Objectivity/DB. Vendor homepage:

[4] R. Bordawekar, Quantitative Characterization and Analysis of the I/O Behavior of a Commercial Distributed-Shared-Memory Machine. CACR Technical Report 157. To appear in the Seventh Workshop on Scalable Shared Memory Multiprocessors. See also rajesh/exemplar1.html

[5] Myrinet network products. Vendor homepage:

[6] TOPS, Testbed for Objectivity Performance and Scalability, V1.0. Available from kholtman/

[7] K. Holtman, Clustering and Reclustering HEP Data in Object Databases. Proc. of CHEP'98, Chicago, USA.
