LDM-129: Data Management Infrastructure Design
LDM-129: Data Management Infrastructure Design
Release 3.0
Mike Freemon, Jeff Kantor
October 11, 2013
Contents

2  Infrastructure Components
3  Facilities
     National Petascale Computing Facility, Champaign, IL, US
     NOAO Facility, La Serena, Chile
     Floorspace, Power, and Cooling
4  Computing
5  Storage
6  Mass Storage
7  Databases
8  Additional Support Servers
9  Cluster Interconnect and Local Networking
10 Long Haul Network
11 Policies
     Replacement Policy
     Storage Overheads
     Spares (hardware failures)
     Extra Capacity
12 Disaster Recovery
13 CyberSecurity
14 Change Record
The Data Management Infrastructure is composed of all computing, storage, and communications hardware and systems software, and all utility systems supporting it, that form the platform of execution and operations for the DM System. All DM System Applications and Middleware are developed, integrated, tested, deployed, and operated on the DM Infrastructure. This document describes the design of the DM Infrastructure at the highest level; it is the umbrella document over many other referenced documents that elaborate on the design in greater detail.

Figure 1: The 3-layered architecture of the Data Management System enables scalability, reliability, and evolutionary capability.

The DM System is distributed across four sites in the United States and Chile. Each site hosts one or more physical facilities, in which reside DM Centers. Each Center performs a specific role in the operational system.

The Base Center is in the Base Facility on the AURA compound in La Serena, Chile. The primary roles of the Base Center are:

- Data Access
- L3 Community Resources

The Archive Center is in the National Petascale Computing Facility at NCSA in Champaign, IL. The primary roles of the Archive Site are:

- Alert Production processing
- Data Release Production processing
- Calibration Production processing
- Data Access
- Education and Public Outreach (EPO) Infrastructure
- L3 Community Resources
Figure 2: Data Management Sites, Facilities, and Centers.
Both sites have copies of all the raw and released data, for data access and disaster recovery purposes. The Base and Archive Sites host the respective Base and Archive Centers, plus a co-located Data Access Center (DAC). The final location of the Headquarters Site is not yet determined, but for planning and design purposes it is assumed to be in Tucson, Arizona. While the Base and Archive Sites provide large-scale data production and data access at supercomputing-center scale, the Headquarters is a management and supervisory control center for the DM System, and as a result is much more modest in terms of infrastructure. The DMS data centers are modern, high-end computing facilities consisting of the significant compute, storage, and networking resources needed to support the generation of, and access to, the LSST data products. The remainder of this document describes those computing resources and technologies in more detail.
CHAPTER 2 Infrastructure Components

The Infrastructure is organized into components, each composed of hardware and software integrated and deployed as an assembly, to provide the computing, communications, and/or storage capabilities required by the DM System. Table 1.1 lists the major infrastructure components of the DM System, and indicates whether each item is needed for the Center, the DAC, or both. It is the shared infrastructure that reduces overall costs and motivates the use of co-location for the Data Access Centers at the Archive and Base Sites.

Table 1.1: The major components of the LSST DM Infrastructure (allocated to the Center, the DAC, or shared).

- Compute for AP, DRP, CPP, MOPS
- Scratch disk for AP, DRP, CPP
- AP Database
- DRP Database
- DMCS Servers
- Data Replication Servers
- VOEvent Brokers
- Disk storage for: postage stamps, coadds, templates, master calibration, image cache
- Calibration Database (*)
- E&F Database (*)
- Local Area Networking (*)
- Tape Library
- MSS Disk Cache (*)
- Connectivity to the public internet (*)
- Logging Servers (*)
- MQ Servers (*)
- L1 Database (*)
- L2 Database (*)
- L3 Database (*)
- L3 Community Scratch (*)
- L3 Community Images
- L3 Community Compute (*)
- On-Demand Services: Cutout Service, Color JPG Service
- DMCS Servers (*)

Both the Base Center and the Archive Center have essentially the same architecture (Figure 1.1), differing only in capacity and quantity. The external network interfaces differ depending on the site. The capacities and quantities are derived from the scientific, system, and operational requirements via a detailed sizing model. The complete sizing model and the process used to arrive at the infrastructure are available in the LSST Project Archive. A summary is provided in Table 1.2 and Table 1.3.
Figure 1.1: Infrastructure Components at the Archive and Base Sites.

Table 1.2: Compute and Facilities Summary.

  Category    Item                     Archive Site       Base Site
  Compute     Teraflops (sustained)
              Nodes                    (1200 hwm)         (115 hwm)
              Cores                    45K-180K           7K-10K
              Memory Bandwidth         TB/s               3-6 TB/s
  Database    Teraflops (sustained)
              Nodes                    (360 hwm)          (340 hwm)
  Facilities  Floorspace               sq ft (875 hwm)    sq ft (500 hwm)
              Power                    kW (610 hwm)       kW (220 hwm)
              Cooling                  MMBtu (2.1 hwm)    MMBtu (0.7 hwm)
Table 1.3: Storage summary.

  Type                      Item              Archive Site    Base Site
  Image Disk Storage        Capacity          PB              PB
                            Drives            (1700 hwm)      (1140 hwm)
                            Disk Bandwidth    GB/s            GB/s
  Database Disk Storage     Capacity          PB              PB
                            Drives            (3000 hwm)      (2200 hwm)
                            Disk Bandwidth    GB/s            GB/s
  Near-line Tape Storage    Capacity          PB              PB
                            Tapes
                            Tape Bandwidth    GB/s            GB/s
  Offsite Tape Storage      Capacity          PB              N/A
                            Tapes                             N/A
                            Tape Bandwidth    GB/s            N/A

This design assumes that the DM System will be built using commodity parts that are not bleeding edge, but rather have been readily available on the market for one to two years. This choice is intended to lower both the risk of integration problems and the time to build a working, production-level system. It also defines a certain cost class for the computing platform that can be described in terms of technology available today. We then assume that we will be able to purchase a system in 2020 in this same cost class with the same number of dollars (ignoring inflation); however, the performance of that system will be greater than the corresponding system purchased today by some performance evolution curve factor.

Finally, note that Base Site equipment is purchased in the U.S. and delivered by the equipment vendors to the Archive Site in Champaign, IL. NCSA installs, configures, and tests the Base Site equipment before shipping it to La Serena. The anticipated network between the Archive Site at NCSA and the Base Site in La Serena, Chile, should be sufficient to transfer LSST's Data Products over the network from NCSA to La Serena. The fallback plan is for NCSA to load the physical storage destined for La Serena with the data products and transfer the data via physical media as part of the annual hardware acquisition cycle.
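The fixed-cost-class purchasing assumption above amounts to riding a price/performance curve: the same budget buys more capability each year. A minimal sketch of that projection follows; the 25%/year improvement rate is an illustrative assumption, not a number from this document (the real sizing model uses measured technology-trend curves).

```python
def projected_capacity(base_capacity: float, years: int,
                       annual_improvement: float = 0.25) -> float:
    """Capability purchasable for a fixed budget after `years` of
    price/performance improvement at `annual_improvement` per year.
    The 25%/yr default is illustrative only."""
    return base_capacity * (1.0 + annual_improvement) ** years

# A system bought today in a given cost class delivering 100 TFLOPS...
today_tflops = 100.0
# ...would, for the same (inflation-ignored) budget 7 years later:
future_tflops = projected_capacity(today_tflops, years=7)
print(f"{future_tflops:.0f} TFLOPS in the same cost class")  # 477 TFLOPS
```

The same compounding logic underlies the just-in-time purchasing policy described later: buying one year before need captures one more year of the curve.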
CHAPTER 3 Facilities

This section describes the operational characteristics of the facilities in which the DM infrastructure resides.

National Petascale Computing Facility, Champaign, IL, US

Figure 2.1: National Petascale Computing Facility in Champaign, IL, US.

The National Petascale Computing Facility (NPCF) is a new data center facility on the campus of the University of Illinois. It was built specifically to house the Blue Waters system, but will also host the LSST Data Management systems. The key characteristics of the facility are:

- 24 MW of power (1/4 of campus electric usage)
- 5900 tons of chilled-water (CHW) cooling
- F3 tornado and seismic resistant design

NPCF is expected to achieve LEED Gold certification, a benchmark for the design, construction, and operation of green buildings. NPCF's forecasted power usage effectiveness (PUE) rating is an impressive 1.1 to 1.2, while a typical data center rating is 1.4. PUE is determined by dividing the amount of power entering a data center by the power used to run the computer infrastructure within it, so efficiency is greater as the quotient decreases toward 1. Three on-site cooling towers will provide water chilled by Mother Nature about 70 percent of the year. Power conversion losses will be reduced by running 480-volt AC power to compute systems.
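The PUE definition above is simple arithmetic; a quick sketch, with illustrative power figures that are not from this document:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total power entering the data center
    divided by the power consumed by the computing equipment itself.
    A value approaching 1.0 means nearly all power reaches the IT load."""
    return total_facility_kw / it_equipment_kw

# Illustrative numbers (assumed, not from the document): 1000 kW IT load.
print(pue(1150.0, 1000.0))  # NPCF-class efficiency -> 1.15
print(pue(1400.0, 1000.0))  # typical data center   -> 1.4
```

At NPCF's forecast PUE of 1.1-1.2, only 10-20% of incoming power is spent on cooling and conversion overhead, versus 40% at a typical facility.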
The facility will operate continually at the high end of the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) standards, meaning the data center will not be overcooled. Equipment must be able to operate with a 65F inlet water temperature and a 78F inlet air temperature. The facility provides high-performance Ethernet connections as required, with up to 300-gigabit external network capacity.

There is no UPS in the NPCF. LSST will install rack-based UPS systems to keep systems running during brief power outages and to automatically manage controlled shutdowns when extended power outages occur. This ensures that file system buffers are flushed to disk to prevent any data loss.

The fire suppression system at the NPCF is a double-action water system. The first triggering event loads the sprinklers with water and pressurizes the system; the water is not released unless the second trigger occurs.

NOAO Facility, La Serena, Chile

NOAO is expanding its facility in La Serena, Chile, in order to accommodate the LSST project. Refer to the Base Site design in the Telescope and Site Subsystem for more detail. The DM requirements for the Base Facility are documented in LSE-77.

Figure 2.2: Floorplan of NOAO facility in La Serena, Chile.
Floorspace, Power, and Cooling

Table 2.1: Floorspace, power, and cooling estimates for the Data Management System.

  Facilities    Archive Site         Base Site
  Floorspace    sq ft (875 hwm)      sq ft (500 hwm)
  Power         kW (610 hwm)         kW (220 hwm)
  Cooling       MMBtu (2.1 hwm)      MMBtu (0.7 hwm)

Figure 2.3: Power and floorspace needed by the Data Management System over the survey period.

Table 2.1 and Figure 2.3 show the facilities usage by the LSST Data Management System over the survey period. This does not include any extra space that might be needed during the process of transitioning replacement equipment or staging Base Site equipment at the Archive Site. Note that the current baseline for power, cooling, and floorspace assumes air-cooled equipment. If the sizing model or technology trends change and we find that flops-per-watt is the primary constraint in our system design, we will evaluate water-cooled systems.
CHAPTER 4 Computing

The primary compute capability for LSST is a computer cluster providing a large yet flexible computational resource. The cluster design was chosen both for its favorable cost and for its flexibility to accommodate design and requirement changes throughout the life of the project. The computational build-out begins in 2018 and continues on an annual basis throughout Operations. The replacement policy eventually reduces the node count in 2025 and beyond via more powerful nodes (see the Replacement Policy section). Figure 3.3 shows the corresponding purchases by year.

Figure 3.1: The number of compute nodes on-the-floor over the survey period.

The sizing of the cluster is based on proven, sustained application performance and projections for hardware performance improvements. The cluster will utilize the GPFS storage described in the next section, and the high-speed InfiniBand network will have bridge devices connecting it to the external network, providing limited visibility of the compute nodes to the full outside network.
System reliability will be achieved in multiple ways. The software design will be tolerant of the loss of a compute node, and the management system will detect and remove failing hardware from the compute pool automatically. The management infrastructure will either include high reliability in the hardware or provide redundancy in the case of failure. The core network will include highly reliable switches utilizing redundancy at the component level (N+1 or N+N power supplies, etc.).

The initial server hardware design is planned as a direct extension of the cluster server hardware available today: a two-socket node with minimal local secondary storage, attached to the InfiniBand network. The primary difference between today's hardware and future systems is expected to be higher core counts and faster memory. All compute nodes purchased at the same time will have the same configuration, to minimize the number of spares needed and to maximize the ability to shift servers around to perform different tasks. The InfiniBand network will utilize the best available technology at the time of the initial deployment, currently estimated to be EDR speeds, with a full core network replacement planned at the mid-life of the project. Secondary storage for the cluster will utilize the GPFS file system described in the next section.

The system will be managed as a distributed memory cluster using industry best practices in system management and security. Tools such as xcat will be used to provide a highly scalable and efficient management interface to deploy and monitor system resources as needed. xcat is in use today at NCSA for the administration of computational clusters and has been proven to scale to the cluster size required for LSST. xcat provides some monitoring capability, which will be augmented with NCSA-provided tools such as clustat and the Integrated Systems Console (ISC) that have been used successfully on past and current systems at NCSA, including Blue Waters.
All system logs will be centrally collected for use in security monitoring and in system problem detection and review. Tools including the ISC and the Simple Event Correlator (SEC) will continuously analyze the log flow to generate alerts for system issues. These alerts will provide system administrators with immediate notice of issues and will also be used to take automated actions to remove problematic components from production. The management practices will conform to industry best practices, including requiring all administrator access to funnel through a central access point and not allowing user privilege escalation. Finally, the core management server will have regular backups to ensure timely system recovery in the event of a major disaster or system intrusion.

The Linux operating system and the rest of the compute node environment will be uniform across the compute infrastructure to simplify management and application deployment. This fits the traditional cluster model well and thus will be the primary usage model. A cloud model will also be available, but is unlikely to be a common usage model in
the primary compute environment. The software environment will be managed for stability and reliability as well as performance. There will be no more than one planned system maintenance outage per month, coordinated with the rest of the project team and conforming to the LSST maintenance schedule and procedures. In addition, all changes to the computational environment will be tracked via a change control process and approved prior to implementation, with appropriate reviews by project staff in accordance with LSST policies.

Hardware is purchased in the year before it is needed in order to leverage price/performance improvements. A special situation occurs for the Commissioning phase of Construction: in 2018, we acquire and install the computing infrastructure needed to support Commissioning, using the same sizing as for the first year of Operations.

Figure 3.2 shows the requirements on the compute infrastructure driven by the LSST processing. Table 3.1 summarizes the technical infrastructure necessary to meet those requirements.

Figure 3.2: The growth of compute requirements over the survey period.

Table 3.1: Compute sizing for the Data Management System.

  Item                     Archive Site    Base Site
  Teraflops (sustained)
  Nodes                    (1200 hwm)      (115 hwm)
  Cores                    45K-180K        7K-10K
  Memory Bandwidth         TB/s            3-6 TB/s
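The automated detect-and-remove behavior described for the cluster management system can be sketched as follows. The metrics, thresholds, and node names here are all invented for illustration; they are not the actual xcat or ISC interfaces, and a production system would drain running jobs before offlining a node.

```python
# Hedged sketch of "detect failing hardware and remove it from the
# compute pool automatically". All names and thresholds are illustrative.

def classify(metrics: dict) -> str:
    """Classify a node as 'ok', 'degraded', or 'failed' using simple
    (assumed) thresholds on error counts, temperature, and liveness."""
    if metrics["ecc_errors"] > 100 or not metrics["responding"]:
        return "failed"
    if metrics["ecc_errors"] > 10 or metrics["temp_c"] > 85:
        return "degraded"
    return "ok"

def sweep(pool: dict) -> list:
    """Remove failed nodes from the active pool; return their names."""
    removed = [name for name, m in pool.items() if classify(m) == "failed"]
    for name in removed:
        del pool[name]  # in practice: drain jobs, then offline the node
    return removed

pool = {
    "cn001": {"ecc_errors": 0,   "temp_c": 60, "responding": True},
    "cn002": {"ecc_errors": 500, "temp_c": 70, "responding": True},
    "cn003": {"ecc_errors": 0,   "temp_c": 65, "responding": False},
}
print(sweep(pool))   # ['cn002', 'cn003']
print(sorted(pool))  # ['cn001']
```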
Figure 3.3: The number of nodes purchased by year over the survey period.
CHAPTER 5 Storage

Image storage will be controller-based storage in a RAID6 8+2 configuration, for protection against individual disk failures. GPFS is the parallel file system. NCSA's current hardware model for the GPFS environment uses building blocks of commodity servers and disk; if more disk capacity or performance is required, hardware can be added to the configuration to accommodate those needs. There are servers with internal SAS disks for metadata, SAS disk controllers, and disk enclosures using today's 4 TB SATA drives. NCSA is utilizing the fast internal SAS drives for metadata needs, and using GPFS metadata replication across servers for data integrity and fault tolerance. The controller and disk enclosure are in the first disk unit for the GPFS NSD server; the other two disk enclosures add additional capacity but not performance (see Figure 4.1).

The servers are best deployed as sister pairs. Both are active NSDs, but they mirror metadata between the two, eliminating the single point of failure. If one fails, there is a slight performance degradation, but the data is still readily available from the secondary server. Currently the GPFS environment at NCSA is connected to the clusters over Ethernet, but in the case of LSST it is just as easy to integrate the GPFS into the compute cluster and use InfiniBand or some other low-latency technology for data environments within a cluster. NCSA is managing two clusters with GPFS filesystems in exactly that way today.

Table 4.1: Image file storage sizing for the Data Management System.

  Item              Archive Site    Base Site
  Capacity          PB              PB
  Drives            (1700 hwm)      (1140 hwm)
  Disk Bandwidth    GB/s            GB/s

Table 4.1 summarizes the LSST storage infrastructure for storing and retrieving image and other file-based data. GPFS was chosen as the baseline for the parallel filesystem implementation based upon the following considerations:

- NCSA has, and will continue to have, deep expertise in GPFS and HPSS.
- NCSA is conducting extensive scaling tests with GPFS, and any potential problems that emerge at high loads will be solved by the time LSST goes into Operations.
- LSST gets special pricing due to the University of Illinois campus licensing agreement with IBM. These prices are quite favorable, and even at the highest rates are lower than NCSA can currently get for equivalent Lustre service.
- NCSA provides level 1 support for all UIUC campus licenses under the site licensing agreement.
- The choice of parallel filesystem implementation is transparent to users of LSST.
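In a RAID6 8+2 layout, each group of 10 drives stores data on 8 and parity on 2, so usable capacity is 80% of raw. A quick sketch of the arithmetic, using the 4 TB drive size the text mentions for today's SATA drives (the 60-drive count is an illustrative assumption):

```python
def usable_tb(n_drives: int, drive_tb: float,
              data_disks: int = 8, parity_disks: int = 2) -> float:
    """Usable capacity of a RAID6 8+2 pool: 8 of every 10 drives hold
    data, 2 hold parity. Only complete 8+2 groups contribute."""
    group = data_disks + parity_disks
    groups = n_drives // group
    return groups * data_disks * drive_tb

# 60 x 4 TB drives arranged in 8+2 groups:
print(usable_tb(60, 4.0))  # 6 groups * 8 data drives * 4 TB = 192.0 TB
```

This 20% parity cost is the "RAID 20%" line item that reappears in the Storage Overheads policy later in the document.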
Figure 4.1: GPFS Storage Infrastructure.
CHAPTER 6 Mass Storage

The mass storage system will be HPSS. The GPFS-HPSS Interface (GHI) is used to create a hierarchical storage system.

The HPSS system is composed of core servers and movers. The core servers are where the metadata and process control reside; they have their own HA environment, with failover between the two servers, and a DB2 database that contains the metadata for all files within the HPSS system. The second component is the movers: this is where the hardware sits for writing data to disk and tape. The performance of writing data to disk and tape is directly proportional to the number of movers and the amount of data they can write. In NCSA's deployment of HPSS for the Blue Waters archive, the core servers are 64-core machines with large data disk arrays, and the movers are Dell 720 machines, each with a portion of a disk cache and 8 fibre-channel-attached tape drives. The 720 machines have two 40 GigE cards for data transfer, a Fibre Channel card for the direct-attached tape drives, and an InfiniBand card for the attached disk cache.

All client interaction (meaning both processing and people) is with the single GPFS namespace. This is due to GHI, which intercepts the request for any data that is not resident in GPFS but is in HPSS, and fetches the data from HPSS on behalf of the user. Clients are no longer required to know the data location; the data is found and brought into the local client disk cache.

The mass storage system at the Archive Site will write data to dual or RAIT tapes. The Base Site will write a single copy, serving as a disaster recovery site. There will be a technology refresh at Year 5 of LSST Operations, when a new tape drive environment will be purchased to replace the existing library equipment, and the library system will be upgraded. Table 5.1 captures the requirements and sizing of the mass storage system.
Table 5.1: Capacities and sizing of the Mass Storage System.

  Tape Storage    Item              Archive Site    Base Site
  Near-line       Capacity          PB              PB
                  Tapes
                  Tape Bandwidth    GB/s            GB/s
  Offsite         Capacity          PB              N/A
                  Tapes                             N/A
                  Tape Bandwidth    GB/s            N/A
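The single-namespace behavior GHI provides (a read of a non-resident file triggers a stage-in from tape before the read completes) can be modeled with a toy sketch. Every name below is invented for illustration; this is not the GHI API, and real recalls operate on GPFS file state, not Python dictionaries.

```python
# Toy model of GHI-style transparent recall: clients only ever ask the
# GPFS namespace for a path; non-resident files are staged from HPSS
# first. Paths and data are illustrative placeholders.

hpss_tape = {"/lsst/raw/visit-0001.fits": b"...image bytes..."}
gpfs_cache: dict[str, bytes] = {}  # disk-resident files

def recall_from_hpss(path: str) -> bytes:
    """Placeholder for a GHI-driven stage-in from the tape archive."""
    return hpss_tape[path]

def read(path: str) -> bytes:
    """Single-namespace read: recall transparently on a residency miss."""
    if path not in gpfs_cache:          # GHI intercepts the miss
        gpfs_cache[path] = recall_from_hpss(path)
    return gpfs_cache[path]

data = read("/lsst/raw/visit-0001.fits")  # first read: staged from tape
data = read("/lsst/raw/visit-0001.fits")  # second read: served from disk
print(len(gpfs_cache))  # 1
```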
CHAPTER 7 Databases

The relational database catalogs are implemented with qserv, an architecture similar to map-reduce but applied to processing SQL queries. The database storage is provided via local disk drives within the database servers themselves. See Document for additional information regarding the database architecture.

There will be a large number of database worker nodes, each with its own local storage. Figure 6.1 shows the number of worker nodes by year, as well as the number of drives per node, the amount of storage per node, and the total number of disk drives in the system. A breakdown of how that storage is used is provided in Figure 6.2. There are two identical instances of the qserv database environment at the two DMS Data Access Centers: the U.S. Data Access Center at NCSA, and the Chilean Data Access Center in La Serena.

Figure 6.1: The number of database nodes on-the-floor over the survey period.

Table 6.1 and Table 6.2 summarize the infrastructure associated with supporting the qserv databases.
Figure 6.2: L2 database disk storage, single site.

Table 6.1: Database worker nodes in the Data Management System.

  Database                 Archive Site    Base Site
  Teraflops (sustained)
  Nodes                    (360 hwm)       (340 hwm)

Table 6.2: Database sizing for the Data Management System.

  Database          Archive Site    Base Site
  Capacity          PB              PB
  Drives            (3000 hwm)      (2200 hwm)
  Disk Bandwidth    GB/s            GB/s
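The shared-nothing pattern behind qserv, where each worker scans only its local partition and a combiner merges the partial results, can be illustrated with a toy scatter/gather. This is a sketch of the pattern only, not qserv's actual interface, query language, or data model.

```python
# Toy scatter/gather over shared-nothing workers. Each "worker" holds
# its own partition of an (invented) object catalog on local storage.

workers = [
    [{"id": 1, "mag": 21.3}, {"id": 2, "mag": 24.9}],
    [{"id": 3, "mag": 19.8}, {"id": 4, "mag": 23.1}],
]

def scatter(predicate):
    """Run the same filter independently on every worker's local data."""
    return [[row for row in part if predicate(row)] for part in workers]

def gather(partials):
    """Merge the per-worker partial results into one answer."""
    return [row for part in partials for row in part]

# Equivalent of: SELECT id FROM objects WHERE mag < 22.0
bright = gather(scatter(lambda r: r["mag"] < 22.0))
print(sorted(r["id"] for r in bright))  # [1, 3]
```

Because no worker needs another worker's data to evaluate the predicate, capacity scales by adding nodes with local disk, which is why the sizing above is expressed in worker-node and drive counts.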
CHAPTER 8 Additional Support Servers

There are a number of additional support servers in the LSST DM computing environment. They include:

- User Data Access: login nodes, web portals
- Science User Interface and Image Access Servers
- VOEvent Brokers
- Pipeline Support: Condor, ActiveMQ brokers
- Cluster Management and Image Deployment
- Data Management Control System (DMCS) Servers, including intersite data transfer
- Network Security Servers (NIDS)
- Logging: collecting and analyzing system logs
- L3 Allocations Support
CHAPTER 9 Cluster Interconnect and Local Networking

The local network technologies will be a combination of 10 GigE and InfiniBand. 10 GigE will be used for the external network interfaces (i.e., external to the DM site), user access servers and services (e.g., web portals, VOEvent servers), mass storage (due to technical limitations), and the Long Haul Network (see the next section). 10 GigE is ubiquitous for these uses and is a familiar and known technology.

InfiniBand will be used as the cluster interconnect for node-to-node communication within the compute cluster, as well as to the database servers. It will also be the storage fabric for the image data. InfiniBand provides the low-latency communication we need at the Base Site for the MPI-based alert generation processing to meet the 60-second latency requirement, as well as the storage I/O performance we need at the Archive Site for the Data Release Production. By using InfiniBand in this way, we avoid buying, implementing, and supporting the more expensive Fibre Channel as the storage fabric.

Figure 8.1: Interconnect Trends (source: Scientific Computing World).
CHAPTER 10 Long Haul Network

The communication link between the Summit and the Base will be 200 Gbps. The network between the Base Site in La Serena and the Archive Site in Champaign, IL, will support 10 Gbps minimum, 40 Gbps during the night hours, and 80 Gbps of burst capability in the event we have a service interruption and need to catch up. The key features of the network plan are:

- Mountain Summit to Base is the only new fiber, at 200 Gbps capacity
- Inter-site long-haul links run on existing fiber
- LSST is leveraging and driving US-Chile long-haul network expansion
- Capacity growth supports construction and commissioning: 1 Gb/s in 2011, 3 Gb/s in 2018, rising further in 2019
- Equipment is available today at budgeted cost

Additional information can be found in the Network Design Document, LSE-78. Figure 9.2 and Figure 9.3 depict the nightly and non-nightly data flows, respectively, over the LSST international network.
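The provisioned rates above translate directly into catch-up time. A minimal sketch of the arithmetic follows; the 15 TB nightly volume is an illustrative assumption, not a figure from this document, and protocol overhead is ignored.

```python
def transfer_hours(data_tb: float, rate_gbps: float) -> float:
    """Hours to move `data_tb` terabytes over a link sustaining
    `rate_gbps` gigabits per second (no protocol overhead assumed)."""
    bits = data_tb * 1e12 * 8           # terabytes -> bits
    return bits / (rate_gbps * 1e9) / 3600.0

# An assumed 15 TB night over the provisioned Base -> Archive rates:
for rate in (10, 40, 80):
    print(f"{rate} Gbps: {transfer_hours(15.0, rate):.1f} h")
# 10 Gbps: 3.3 h; 40 Gbps: 0.8 h; 80 Gbps: 0.4 h
```

The same function shows why the 80 Gbps burst mode matters: after an outage, a backlog of several nights can be cleared at twice the normal nightly rate.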
Figure 9.1: The LSST Long Haul Network.

Figure 9.2: The Nightly Data Flows over the LSST International Network.
Figure 9.3: The Non-Nightly Data Flows over the LSST International Network.
CHAPTER 11 Policies

A just-in-time approach for purchasing hardware is used to leverage the fact that hardware prices get cheaper over time. This also allows for the use of the latest features of the hardware, if valuable to the project. We buy in the fiscal year before the need occurs so that the infrastructure is installed, configured, tested, and ready to go when needed. There is also a ramp-up of the initial computing infrastructure for the Commissioning phase of Construction. Shown in this section are the various policies that we implement for the DM computing infrastructure. Additional supporting discussion is contained within document LDM.

Replacement Policy

  Compute Nodes          5 years
  Disk Drives            3 years
  Tape Media             5 years
  Tape Drives            3 years
  Tape Library System    Once, at Year 5

Storage Overheads

  RAID          20%
  Filesystem    10%

Spares (hardware failures)

  Compute Nodes    3% of nodes
  Disk Drives      3% of drives
  Tape Media       3% of tapes

Extra Capacity

  Disk    10% of TB
  Tape    10% of TB
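The overhead policies compose into a single raw-purchase multiplier. The document does not spell out how the percentages combine, so the sketch below adopts one plausible composition: RAID and filesystem overheads are capacity lost from raw disk, while extra capacity and spares are additional purchases.

```python
def raw_tb_to_buy(science_tb: float) -> float:
    """Disk (TB) to purchase for `science_tb` of science data under the
    policy numbers above (20% RAID, 10% filesystem, 10% extra capacity,
    3% spares). The composition order is an assumption, not stated in
    the document."""
    needed = science_tb * 1.10        # +10% extra capacity policy
    raw = needed / (0.80 * 0.90)      # 20% RAID + 10% filesystem lost
    return raw * 1.03                 # +3% spare drives

# Per 1000 TB of science data, roughly 1.57x must be purchased:
print(round(raw_tb_to_buy(1000.0), 1))  # 1573.6
```

Under this reading, every petabyte of usable science storage costs about 1.57 PB of raw disk, which is why the sizing tables track drive counts rather than usable capacity alone.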
CHAPTER 12 Disaster Recovery

Mass storage is used at both sites to ensure the safekeeping of data products. At the Archive Site, the mass storage system will write two copies of all data to different media. One set of media stays in the tape library for later recall as needed; the second copy is transported off-site. This protects against both media failures (e.g., bad tapes) and loss of the facility itself. The Base Site will write a single copy of data to tape, which remains near-line in the tape library system. Either site can be the source of data for recovery of the other site.
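The copy policy above, two media copies at the Archive with one offsite, plus one near-line copy at the Base, can be expressed as a simple invariant check. The record structure below is invented for illustration; an operational system would derive it from the tape catalogs.

```python
# Invariant for the disaster-recovery policy: every dataset has three
# tape copies overall, at least one offsite, and exists at both sites.

copies = {  # illustrative catalog: dataset -> [(site, location), ...]
    "DR1-images": [("archive", "library"), ("archive", "offsite"),
                   ("base", "library")],
}

def meets_policy(locations) -> bool:
    """True iff the copy set satisfies the stated DR policy."""
    sites = {site for site, _ in locations}
    has_offsite = any(loc == "offsite" for _, loc in locations)
    return len(locations) >= 3 and has_offsite and sites == {"archive", "base"}

print(all(meets_policy(locs) for locs in copies.values()))  # True
```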
CHAPTER 13 CyberSecurity

LSST has an open data policy: the primary data deliverables of LSST Data Management are made available to authorized users without any proprietary period. As a result, the central considerations when applying security policies are not about the theft of L1 and L2 data. The main considerations are:

- Data Protection
- Data Integrity
- Misuse of Facility
- L3 Community Data

We leverage best practices to ensure a secure computing environment. This includes monitoring, such as the use of intrusion detection systems; partitioning of resources, such as segregating the L3 compute nodes from the core DM processing nodes; and limiting the scope of authorizations to only that which is needed. Refer to LSE-99 for additional information. That is an LSST system-wide document, not just DM, as cybersecurity reaches across all of the LSST subsystems.
CHAPTER 14 Change Record

  Version    Date        Description                                  Owner
  1.0        7/13/2011   Initial version as an assembled document;    Mike Freemon, Jeff Kantor
                         previous material was distributed.
  2.0        /9/2013     Updated for Final Design Review              Mike Freemon, Jeff Kantor
  3.0        10/11/2013  TCT approved                                 R Allsman
More informationOptimizing Large Arrays with StoneFly Storage Concentrators
Optimizing Large Arrays with StoneFly Storage Concentrators All trademark names are the property of their respective companies. This publication contains opinions of which are subject to change from time
More informationArchival Storage At LANL Past, Present and Future
Archival Storage At LANL Past, Present and Future Danny Cook Los Alamos National Laboratory dpc@lanl.gov Salishan Conference on High Performance Computing April 24-27 2006 LA-UR-06-0977 Main points of
More informationSMB Direct for SQL Server and Private Cloud
SMB Direct for SQL Server and Private Cloud Increased Performance, Higher Scalability and Extreme Resiliency June, 2014 Mellanox Overview Ticker: MLNX Leading provider of high-throughput, low-latency server
More informationTraditionally, a typical SAN topology uses fibre channel switch wiring while a typical NAS topology uses TCP/IP protocol over common networking
Network Storage for Business Continuity and Disaster Recovery and Home Media White Paper Abstract Network storage is a complex IT discipline that includes a multitude of concepts and technologies, like
More informationDAS, NAS or SAN: Choosing the Right Storage Technology for Your Organization
DAS, NAS or SAN: Choosing the Right Storage Technology for Your Organization New Drivers in Information Storage Data is unquestionably the lifeblood of today s digital organization. Storage solutions remain
More information(Scale Out NAS System)
For Unlimited Capacity & Performance Clustered NAS System (Scale Out NAS System) Copyright 2010 by Netclips, Ltd. All rights reserved -0- 1 2 3 4 5 NAS Storage Trend Scale-Out NAS Solution Scaleway Advantages
More informationViolin Memory Arrays With IBM System Storage SAN Volume Control
Technical White Paper Report Best Practices Guide: Violin Memory Arrays With IBM System Storage SAN Volume Control Implementation Best Practices and Performance Considerations Version 1.0 Abstract This
More informationPADS GPFS Filesystem: Crash Root Cause Analysis. Computation Institute
PADS GPFS Filesystem: Crash Root Cause Analysis Computation Institute Argonne National Laboratory Table of Contents Purpose 1 Terminology 2 Infrastructure 4 Timeline of Events 5 Background 5 Corruption
More informationDriving IBM BigInsights Performance Over GPFS Using InfiniBand+RDMA
WHITE PAPER April 2014 Driving IBM BigInsights Performance Over GPFS Using InfiniBand+RDMA Executive Summary...1 Background...2 File Systems Architecture...2 Network Architecture...3 IBM BigInsights...5
More informationDesigning a Cloud Storage System
Designing a Cloud Storage System End to End Cloud Storage When designing a cloud storage system, there is value in decoupling the system s archival capacity (its ability to persistently store large volumes
More informationSAN TECHNICAL - DETAILS/ SPECIFICATIONS
SAN TECHNICAL - DETAILS/ SPECIFICATIONS Technical Details / Specifications for 25 -TB Usable capacity SAN Solution Item 1) SAN STORAGE HARDWARE : One No. S.N. Features Description Technical Compliance
More informationThe safer, easier way to help you pass any IT exams. Exam : 000-115. Storage Sales V2. Title : Version : Demo 1 / 5
Exam : 000-115 Title : Storage Sales V2 Version : Demo 1 / 5 1.The IBM TS7680 ProtecTIER Deduplication Gateway for System z solution is designed to provide all of the following EXCEPT: A. ESCON attach
More informationReducing Storage TCO With Private Cloud Storage
Prepared by: Colm Keegan, Senior Analyst Prepared: October 2014 With the burgeoning growth of data, many legacy storage systems simply struggle to keep the total cost of ownership (TCO) in check. This
More informationBackup and Recovery Solutions for Exadata. Cor Beumer Storage Sales Specialist Oracle Nederland
Backup and Recovery Solutions for Exadata Cor Beumer Storage Sales Specialist Oracle Nederland Recovery Point and Recovery Time Wks Days Hrs Mins Secs Secs Mins Hrs Days Wks Data Loss (Recovery Point Objective)
More informationUsing High Availability Technologies Lesson 12
Using High Availability Technologies Lesson 12 Skills Matrix Technology Skill Objective Domain Objective # Using Virtualization Configure Windows Server Hyper-V and virtual machines 1.3 What Is High Availability?
More informationStorage Solutions to Maximize Success in VDI Environments
Storage Solutions to Maximize Success in VDI Environments Contents Introduction: Why VDI?. 1 VDI Challenges. 2 Storage Solutions Optimized for VDI. 3 Conclusion. 6 Brought to you compliments of: Introduction:
More informationDell PowerVault MD Family. Modular storage. The Dell PowerVault MD storage family
Dell MD Family Modular storage The Dell MD storage family Dell MD Family Simplifying IT The MD Family simplifies IT by optimizing your data storage architecture and ensuring the availability of your data.
More informationRFP - Equipment for the Replication of Critical Systems at Bank of Mauritius Tower and at Disaster Recovery Site. 06 March 2014
RFP - Equipment for the Replication of Critical Systems at Bank of Mauritius Tower and at Disaster Recovery Site Response to Queries: 06 March 2014 (1) Please specify the number of drives required in the
More informationImplementing a Digital Video Archive Based on XenData Software
Based on XenData Software The Video Edition of XenData Archive Series software manages a digital tape library on a Windows Server 2003 platform to create a digital video archive that is ideal for the demanding
More informationThe Methodology Behind the Dell SQL Server Advisor Tool
The Methodology Behind the Dell SQL Server Advisor Tool Database Solutions Engineering By Phani MV Dell Product Group October 2009 Executive Summary The Dell SQL Server Advisor is intended to perform capacity
More informationIntroduction 1 Performance on Hosted Server 1. Benchmarks 2. System Requirements 7 Load Balancing 7
Introduction 1 Performance on Hosted Server 1 Figure 1: Real World Performance 1 Benchmarks 2 System configuration used for benchmarks 2 Figure 2a: New tickets per minute on E5440 processors 3 Figure 2b:
More informationAchieving Real-Time Business Solutions Using Graph Database Technology and High Performance Networks
WHITE PAPER July 2014 Achieving Real-Time Business Solutions Using Graph Database Technology and High Performance Networks Contents Executive Summary...2 Background...3 InfiniteGraph...3 High Performance
More informationVERITAS Business Solutions. for DB2
VERITAS Business Solutions for DB2 V E R I T A S W H I T E P A P E R Table of Contents............................................................. 1 VERITAS Database Edition for DB2............................................................
More informationGPFS und HPSS am HLRS
GPFS und HPSS am HLRS Peter W. Haas Archivierung im Bereich Höchstleistungsrechner Swisstopo, Bern 3. Juli 2009 1 High Performance Computing Center Stuttgart Table of Contents 1. What are GPFS and HPSS
More informationMANAGED DATABASE SOLUTIONS
Page 0 2015 SOLUTION BRIEF MANAGED DATABASE SOLUTIONS NET ACCESS LLC 9 Wing Drive Cedar Knolls, NJ 07927 www.nac.net Page 1 Table of Contents 1. Introduction... 2 2. Net Access Managed Services Solution
More informationTechnology Insight Series
HP s Information Supply Chain Optimizing Information, Data and Storage for Business Value John Webster August, 2011 Technology Insight Series Evaluator Group Copyright 2011 Evaluator Group, Inc. All rights
More informationIBM Global Technology Services September 2007. NAS systems scale out to meet growing storage demand.
IBM Global Technology Services September 2007 NAS systems scale out to meet Page 2 Contents 2 Introduction 2 Understanding the traditional NAS role 3 Gaining NAS benefits 4 NAS shortcomings in enterprise
More informationOracle Maximum Availability Architecture with Exadata Database Machine. Morana Kobal Butković Principal Sales Consultant Oracle Hrvatska
Oracle Maximum Availability Architecture with Exadata Database Machine Morana Kobal Butković Principal Sales Consultant Oracle Hrvatska MAA is Oracle s Availability Blueprint Oracle s MAA is a best practices
More informationIBM Storwize Rapid Application Storage solutions
IBM Storwize Rapid Application Storage solutions Efficient, integrated, pretested and powerful solutions to accelerate deployment and return on investment. Highlights Improve disk utilization by up to
More informationThe Benefits of Virtualizing
T E C H N I C A L B R I E F The Benefits of Virtualizing Aciduisismodo Microsoft SQL Dolore Server Eolore in Dionseq Hitachi Storage Uatummy Environments Odolorem Vel Leveraging Microsoft Hyper-V By Heidi
More informationOnline Storage Replacement Strategy/Solution
I. Current Storage Environment Online Storage Replacement Strategy/Solution ISS currently maintains a substantial online storage infrastructure that provides centralized network-accessible storage for
More informationIBM Global Technology Services November 2009. Successfully implementing a private storage cloud to help reduce total cost of ownership
IBM Global Technology Services November 2009 Successfully implementing a private storage cloud to help reduce total cost of ownership Page 2 Contents 2 Executive summary 3 What is a storage cloud? 3 A
More informationData storage services at CC-IN2P3
Centre de Calcul de l Institut National de Physique Nucléaire et de Physique des Particules Data storage services at CC-IN2P3 Jean-Yves Nief Agenda Hardware: Storage on disk. Storage on tape. Software:
More informationW H I T E P A P E R T h e C r i t i c a l N e e d t o P r o t e c t M a i n f r a m e B u s i n e s s - C r i t i c a l A p p l i c a t i o n s
Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com W H I T E P A P E R T h e C r i t i c a l N e e d t o P r o t e c t M a i n f r a m e B u s i n e
More informationMEDIAROOM. Products Hosting Infrastructure Documentation. Introduction. Hosting Facility Overview
MEDIAROOM Products Hosting Infrastructure Documentation Introduction The purpose of this document is to provide an overview of the hosting infrastructure used for our line of hosted Web products and provide
More informationWHITE PAPER BRENT WELCH NOVEMBER
BACKUP WHITE PAPER BRENT WELCH NOVEMBER 2006 WHITE PAPER: BACKUP TABLE OF CONTENTS Backup Overview 3 Background on Backup Applications 3 Backup Illustration 4 Media Agents & Keeping Tape Drives Busy 5
More informationPrivate cloud computing advances
Building robust private cloud services infrastructures By Brian Gautreau and Gong Wang Private clouds optimize utilization and management of IT resources to heighten availability. Microsoft Private Cloud
More informationBuilding Storage Service in a Private Cloud
Building Storage Service in a Private Cloud Sateesh Potturu & Deepak Vasudevan Wipro Technologies Abstract Storage in a private cloud is the storage that sits within a particular enterprise security domain
More informationHigh Performance Computing (HPC) Solutions in High Density Data Centers
EXECUTIVE REPORT High Performance Computing (HPC) Solutions in High Density Data Centers How s Houston West data center campus delivers the highest density solutions to customers Overview With the ever-increasing
More informationProtect Data... in the Cloud
QUASICOM Private Cloud Backups with ExaGrid Deduplication Disk Arrays Martin Lui Senior Solution Consultant Quasicom Systems Limited Protect Data...... in the Cloud 1 Mobile Computing Users work with their
More informationVERITAS Backup Exec 9.0 for Windows Servers
WHITE PAPER Data Protection Solutions for Network Attached Storage VERITAS Backup Exec 9.0 for Windows Servers VERSION INCLUDES TABLE OF CONTENTS STYLES 1 TABLE OF CONTENTS Background...3 Why Use a NAS
More informationEMC XtremSF: Delivering Next Generation Performance for Oracle Database
White Paper EMC XtremSF: Delivering Next Generation Performance for Oracle Database Abstract This white paper addresses the challenges currently facing business executives to store and process the growing
More informationUsing EonStor FC-host Storage Systems in VMware Infrastructure 3 and vsphere 4
Using EonStor FC-host Storage Systems in VMware Infrastructure 3 and vsphere 4 Application Note Abstract This application note explains the configure details of using Infortrend FC-host storage systems
More informationOracle Database Backup Service. Secure Backup in the Oracle Cloud
Oracle Database Backup Service Secure Backup in the Oracle Cloud Today s organizations are increasingly adopting cloud-based IT solutions and migrating on-premises workloads to public clouds. The motivation
More informationOVERVIEW. CEP Cluster Server is Ideal For: First-time users who want to make applications highly available
Phone: (603)883-7979 sales@cepoint.com Cepoint Cluster Server CEP Cluster Server turnkey system. ENTERPRISE HIGH AVAILABILITY, High performance and very reliable Super Computing Solution for heterogeneous
More informationBusiness-centric Storage for small and medium-sized enterprises. How ETERNUS DX powered by Intel Xeon processors improves data management
Business-centric Storage for small and medium-sized enterprises How DX powered by Intel Xeon processors improves data management DX Online Storage Family Architecture DX60 S2 DX100 S3 DX200 S3 Flexible
More informationMaurice Askinazi Ofer Rind Tony Wong. HEPIX @ Cornell Nov. 2, 2010 Storage at BNL
Maurice Askinazi Ofer Rind Tony Wong HEPIX @ Cornell Nov. 2, 2010 Storage at BNL Traditional Storage Dedicated compute nodes and NFS SAN storage Simple and effective, but SAN storage became very expensive
More informationIBM System x GPFS Storage Server
IBM System x GPFS Storage Crispin Keable Technical Computing Architect 1 IBM Technical Computing comprehensive portfolio uniquely addresses supercomputing and mainstream client needs Technical Computing
More informationNational Servers Program Implementation Plan National eresearch Collaboration Infrastructure Project
National Servers Program Implementation Plan National eresearch Collaboration Infrastructure Project University of Melbourne June 2010 1 OVERVIEW 2 2 POLICY DEVELOPMENT WITH THE SECTOR 3 3 SERVICE AND
More informationUnisys ClearPath Forward Fabric Based Platform to Power the Weather Enterprise
Unisys ClearPath Forward Fabric Based Platform to Power the Weather Enterprise Introducing Unisys All in One software based weather platform designed to reduce server space, streamline operations, consolidate
More informationConcepts Introduced in Chapter 6. Warehouse-Scale Computers. Important Design Factors for WSCs. Programming Models for WSCs
Concepts Introduced in Chapter 6 Warehouse-Scale Computers introduction to warehouse-scale computing programming models infrastructure and costs cloud computing A cluster is a collection of desktop computers
More informationDELL TM PowerEdge TM T610 500 Mailbox Resiliency Exchange 2010 Storage Solution
DELL TM PowerEdge TM T610 500 Mailbox Resiliency Exchange 2010 Storage Solution Tested with: ESRP Storage Version 3.0 Tested Date: Content DELL TM PowerEdge TM T610... 1 500 Mailbox Resiliency
More informationSymantec NetBackup 5000 Appliance Series
A turnkey, end-to-end, global deduplication solution for the enterprise. Data Sheet: Data Protection Overview Symantec NetBackup 5000 series offers your organization a content aware, end-to-end, and global
More informationProtecting Microsoft SQL Server with an Integrated Dell / CommVault Solution. Database Solutions Engineering
Protecting Microsoft SQL Server with an Integrated Dell / CommVault Solution Database Solutions Engineering By Subhashini Prem and Leena Kushwaha Dell Product Group March 2009 THIS WHITE PAPER IS FOR INFORMATIONAL
More informationWestek Technology Snapshot and HA iscsi Replication Suite
Westek Technology Snapshot and HA iscsi Replication Suite Westek s Power iscsi models have feature options to provide both time stamped snapshots of your data; and real time block level data replication
More informationAgenda. Enterprise Application Performance Factors. Current form of Enterprise Applications. Factors to Application Performance.
Agenda Enterprise Performance Factors Overall Enterprise Performance Factors Best Practice for generic Enterprise Best Practice for 3-tiers Enterprise Hardware Load Balancer Basic Unix Tuning Performance
More informationArchitecting a High Performance Storage System
WHITE PAPER Intel Enterprise Edition for Lustre* Software High Performance Data Division Architecting a High Performance Storage System January 2014 Contents Introduction... 1 A Systematic Approach to
More informationHADOOP ON ORACLE ZFS STORAGE A TECHNICAL OVERVIEW
HADOOP ON ORACLE ZFS STORAGE A TECHNICAL OVERVIEW 757 Maleta Lane, Suite 201 Castle Rock, CO 80108 Brett Weninger, Managing Director brett.weninger@adurant.com Dave Smelker, Managing Principal dave.smelker@adurant.com
More informationLS-DYNA Best-Practices: Networking, MPI and Parallel File System Effect on LS-DYNA Performance
11 th International LS-DYNA Users Conference Session # LS-DYNA Best-Practices: Networking, MPI and Parallel File System Effect on LS-DYNA Performance Gilad Shainer 1, Tong Liu 2, Jeff Layton 3, Onur Celebioglu
More informationMEMORANDUM OF AGREEMENT BETWEEN THE BOARD OF TRUSTEES OF THE UNIVERSITY OF ILLINOIS AND THE ASSOCIATION OF UNIVERSITIES FOR RESEARCH IN ASTRONOMY.
Memorandum of Agreement between The Board of Trustees of the University of Illinois (on behalf of the National Center for Supercomputing Applications NCSA) and the Association of Universities for Research
More informationLarge File System Backup NERSC Global File System Experience
Large File System Backup NERSC Global File System Experience M. Andrews, J. Hick, W. Kramer, A. Mokhtarani National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory
More informationUnitrends Recovery-Series: Addressing Enterprise-Class Data Protection
Solution Brief Unitrends Recovery-Series: Addressing Enterprise-Class Data Protection 2 Unitrends has leveraged over 20 years of experience in understanding ever-changing data protection challenges in
More informationHow To Speed Up A Flash Flash Storage System With The Hyperq Memory Router
HyperQ Hybrid Flash Storage Made Easy White Paper Parsec Labs, LLC. 7101 Northland Circle North, Suite 105 Brooklyn Park, MN 55428 USA 1-763-219-8811 www.parseclabs.com info@parseclabs.com sales@parseclabs.com
More informationHow to Choose your Red Hat Enterprise Linux Filesystem
How to Choose your Red Hat Enterprise Linux Filesystem EXECUTIVE SUMMARY Choosing the Red Hat Enterprise Linux filesystem that is appropriate for your application is often a non-trivial decision due to
More informationQuantum StorNext. Product Brief: Distributed LAN Client
Quantum StorNext Product Brief: Distributed LAN Client NOTICE This product brief may contain proprietary information protected by copyright. Information in this product brief is subject to change without
More informationInfortrend ESVA Family Enterprise Scalable Virtualized Architecture
Infortrend ESVA Family Enterprise Scalable Virtualized Architecture R Optimized ROI Ensures the most efficient allocation of consolidated capacity and computing power, and meets wide array of service level
More informationBusiness Continuity in Today s Cloud Economy. Balancing regulation and security using Hybrid Cloud while saving your company money.
! Business Continuity in Today s Cloud Economy Balancing regulation and security using Hybrid Cloud while saving your company money.. Business Continuity is an area that every organization, and IT Executive,
More informationOptimizing and Securing an Industrial DCS with VMware
Optimizing and Securing an Industrial DCS with VMware Global Process Automation deploys a new DCS using VMware to create a secure and robust operating environment for operators and engineers. by Doug Clarkin
More informationNET ACCESS VOICE PRIVATE CLOUD
Page 0 2015 SOLUTION BRIEF NET ACCESS VOICE PRIVATE CLOUD A Cloud and Connectivity Solution for Hosted Voice Applications NET ACCESS LLC 9 Wing Drive Cedar Knolls, NJ 07927 www.nac.net Page 1 Table of
More informationHyperQ DR Replication White Paper. The Easy Way to Protect Your Data
HyperQ DR Replication White Paper The Easy Way to Protect Your Data Parsec Labs, LLC 7101 Northland Circle North, Suite 105 Brooklyn Park, MN 55428 USA 1-763-219-8811 www.parseclabs.com info@parseclabs.com
More informationNEC Express Partner Program. Deliver true innovation. Enjoy the rewards.
NEC Express Partner Program Deliver true innovation. Enjoy the rewards. Why should you become an NEC Express Partner? As a value-added reseller, you re under enormous pressure to grow rapidly, control
More informationArchive Data Retention & Compliance. Solutions Integrated Storage Appliances. Management Optimized Storage & Migration
Solutions Integrated Storage Appliances Management Optimized Storage & Migration Archive Data Retention & Compliance Services Global Installation & Support SECURING THE FUTURE OF YOUR DATA w w w.q sta
More informationIBM Storwize Rapid Application Storage
IBM Storwize Rapid Application Storage Efficient, pretested, integrated and powerful solution to accelerate deployment and return on investment. Highlights Improve disk utilization by up to 30 percent
More informationEMC DATA DOMAIN OPERATING SYSTEM
EMC DATA DOMAIN OPERATING SYSTEM Powering EMC Protection Storage ESSENTIALS High-Speed, Scalable Deduplication Up to 58.7 TB/hr performance Reduces requirements for backup storage by 10 to 30x and archive
More informationDELL s Oracle Database Advisor
DELL s Oracle Database Advisor Underlying Methodology A Dell Technical White Paper Database Solutions Engineering By Roger Lopez Phani MV Dell Product Group January 2010 THIS WHITE PAPER IS FOR INFORMATIONAL
More informationOracle Database Scalability in VMware ESX VMware ESX 3.5
Performance Study Oracle Database Scalability in VMware ESX VMware ESX 3.5 Database applications running on individual physical servers represent a large consolidation opportunity. However enterprises
More informationEMC DATA DOMAIN OPERATING SYSTEM
ESSENTIALS HIGH-SPEED, SCALABLE DEDUPLICATION Up to 58.7 TB/hr performance Reduces protection storage requirements by 10 to 30x CPU-centric scalability DATA INVULNERABILITY ARCHITECTURE Inline write/read
More informationEMC Unified Storage for Microsoft SQL Server 2008
EMC Unified Storage for Microsoft SQL Server 2008 Enabled by EMC CLARiiON and EMC FAST Cache Reference Copyright 2010 EMC Corporation. All rights reserved. Published October, 2010 EMC believes the information
More informationBlueArc unified network storage systems 7th TF-Storage Meeting. Scale Bigger, Store Smarter, Accelerate Everything
BlueArc unified network storage systems 7th TF-Storage Meeting Scale Bigger, Store Smarter, Accelerate Everything BlueArc s Heritage Private Company, founded in 1998 Headquarters in San Jose, CA Highest
More information