BlueArc's Architecture for NFS v4.1 and pNFS: Delivering Performance Through Standards


Table of Contents

Introduction
File I/O: Addressing the (New) HPC Bottleneck
Parallel File Systems
pNFS: An Open Industry-Standard Parallel File System
BlueArc's Commitment to Performance Through Standards
BlueArc pNFS
BlueArc pNFS Architecture and Approach
BlueArc Mercury Hybrid-Core Architecture
SiliconFS for Robust NFS Performance
Designing an Effective NFS Architecture for NFS v4.1 and pNFS
General Design Requirements
Responding to a Variety of Client Loads
Overcoming the Limitations of Traditional NAS
BlueArc NFS v4.1 and pNFS Architecture
Architecture Overview
Breaking the One-to-One Restriction
Achieving True Multidimensional Scalability
BlueArc Professional Services: A Complete pNFS Solution
Conclusion

Introduction

The Network File System (NFS) has served the industry well since its introduction and establishment as a standard in 1986, and the standard has continued to evolve to meet the changing needs of an increasingly dynamic industry. As a company dedicated to standards, BlueArc supplies considerable innovation and scalable NFS performance, and has delivered continual world-record performance and line-speed access to storage. At the same time, larger high-performance computing (HPC) environments have demanded throughput levels that could only be delivered by parallel file systems, which were largely proprietary. With the advent of NFS Version 4.1, parallel NFS (pNFS) becomes a truly industry-standard parallel file system, a development wholeheartedly embraced by BlueArc.

BlueArc is focused on building and selling industry-leading products, solutions, and services that address high-performance file serving at any scale. BlueArc pNFS is the latest incarnation of this scale-right approach, providing an extension for scaling storage while optimizing access to unstructured data through the development of the underlying SiliconFS file system. To this end, the BlueArc architecture for NFS v4.1 and pNFS provides the advantages of an industry-standard pNFS implementation while eliminating the limitations of traditional network attached storage (NAS) architecture, all supported as a comprehensive commercial offering.

This paper provides information on BlueArc's architecture for NFS v4.1 and pNFS. BlueArc technology, including the BlueArc Mercury hybrid-core architecture and the hardware-accelerated SiliconFS file system, represents an ideal foundation for implementing pNFS. Adding BlueArc pNFS to BlueArc's existing technology portfolio further enhances the company's scale-right capabilities, promoting coexistence and improved infrastructure utilization. This document describes BlueArc's comprehensive architecture for NFS v4.1 and pNFS, and provides high-level examples of its anticipated application. Hardware and software that provide different aspects of the architecture will be delivered over a number of product releases in accordance with planned BlueArc release schedules.

File I/O: Addressing the (New) HPC Bottleneck

Compute clusters are everywhere, from traditional supercomputing and HPC markets to emerging commercial uses of HPC technology in the enterprise. Whatever their purpose, clustered applications typically represent significant computational challenges, often requiring massive levels of I/O in terms of both data consumption and creation. This trend has only accelerated as cluster technology has evolved with the availability of faster multi-core CPUs, larger interconnected clusters, and GPU-based compute acceleration. The need for storage capacity and throughput is felt most strongly in temporary workspace (scratch) storage for the cluster, but home directories for cluster users are also starting to require bandwidth in the double-digit GB/second range, an emerging trend with expected steady growth.

As applications have been written (or rewritten) to take specific advantage of cluster resources, potentially thousands of cluster compute nodes can attempt to access storage pools simultaneously. Traditional file systems, where bandwidth is limited by the connection to one or a few storage servers, simply cannot scale to the levels required for high-end HPC clusters.
Without sufficient available storage throughput, powerful and expensive cluster resources can literally stall, waiting for I/O operations to complete. To address this issue, parallel file systems have evolved to scale bandwidth linearly with the size of the storage infrastructure.

Parallel File Systems

Parallel file systems support HPC applications by allowing compute nodes to have concurrent read and write access to the same set of files at the same time. Data for a single file is typically striped across multiple storage nodes to provide scalable performance to individual files. A number of competing parallel file systems have existed at the high end of the HPC space for some time.

Though capable in their own right, these early approaches posed several challenges for widespread adoption:

- All previous parallel file systems were either proprietary or experimental in nature, and not based on industry standards.
- To function, all of these parallel file systems require that specific client software be installed on every compute node in the cluster, presenting an expensive and time-consuming licensing, installation, and maintenance headache, especially for large clusters.
- No interoperability has been provided between competing proprietary parallel file system solutions, presenting a barrier to change and practically limiting early adopters to the storage innovations of a single vendor.

Moreover, while open source parallel file systems such as Lustre have provided a solution for some, the technology has remained complex, and Lustre deployments have typically required considerable expertise to configure, tune, and operate. Open source ultimately does not equate to industry standard, but Lustre has at least benefited those who have the skills and time to understand and influence the process. Unfortunately, changing ownership of open source projects such as Lustre can rapidly distort the nature of related community efforts.

pNFS: An Open Industry-Standard Parallel File System

True industry standards allow vendors and customers alike to participate on a level playing field, avoiding both vendor lock-in and the complexities and risks of experimental technologies. As a part of the NFS v4.1 standard, pNFS benefits from the robustness of the standards process as well as from considerable private industry expertise and experience, along with input from the open source community. A large number of very capable people and organizations have contributed technology and have been involved in the pNFS effort, including many with considerable experience building early proprietary parallel file systems. In the end, competition based on standards results in simplification for end users and better solutions, just as the original NFS efforts have led to a long-lived and thriving ecosystem of interoperable products.

pNFS effectively combines the performance and scalability benefits of a parallel file system with the ubiquity of the NFS standard for network file systems. Organizations get increased performance and scalability in storage infrastructure as well as investment protection and the ability to choose and interact with best-of-breed solutions. As with other effective standards, pNFS allows vendors to standardize on the protocol, with the ability to innovate through the implementations that support it. Addressing one of the major issues with proprietary parallel file systems, pNFS clients are expected to be integrated and packaged with major operating systems, just as conventional NFS clients are today. pNFS clients are anticipated for major Linux distributions, the Solaris Operating System, and Microsoft environments, eliminating the need to license, install, and separately maintain client-side code on every node in the cluster. As with existing NFS clients, pNFS clients will be supported and tuned by operating system providers, and robust pNFS clients will be expected to work with pNFS solutions from multiple vendors. Client reference implementations are being written as a part of the standardization and testing process.
Eventually, the advent of pNFS as a standard will drive the adoption of parallel file systems across a broad range of data-intensive industries and organizations. As these groups converge on a standard parallel file system, the demands on parallel file systems will change as well, presenting new opportunities for vendors to innovate and solve unique problems. For example, new types of users and applications will introduce increasingly mixed workloads that will need to be supported. The eventual adoption of parallel file systems by more commercial HPC applications will also require additional enterprise capabilities. For example, enhanced reliability and availability will be necessary to protect valuable data and help ensure that important cluster applications continue to operate. This evolution of the standard is similar to the path taken by the NFS standard over time.

BlueArc's Commitment to Performance Through Standards

As an innovator in high-performance, high-capacity storage products, BlueArc has a long history of commitment to standards such as NFS. BlueArc has elected not to implement its own proprietary parallel file system, preferring an industry-standard approach to file system access while continually innovating the underlying architecture of data and metadata storage. BlueArc has been involved in the pNFS standards process and has contributed technology. BlueArc has a history of providing very high-performance, standards-based storage solutions, and has maintained considerable NFS performance leadership throughout its history. The BlueArc Mercury hybrid-core architecture and SiliconFS hardware-accelerated file system constitute an ideal foundation for building a high-performance pNFS solution.

Given its history of record-setting performance through NFS, BlueArc is strategically committed to the success of pNFS. Unlike many who will be providing pNFS solutions, BlueArc's focus is not clouded by a competing parallel file system strategy. BlueArc is also not in the business of charging licensing fees for proprietary client-side software, and welcomes openly available standard pNFS clients. BlueArc pNFS support will be delivered alongside current standard protocols as part of a comprehensive storage strategy, allowing users to choose the protocols that best suit their needs. As a part of this commitment, BlueArc has been active in pNFS standardization activities, and has been actively addressing many of the challenges that must be solved for pNFS implementations to become commonplace throughout the spectrum of traditional HPC deployments and beyond. Specifically, the innovation present in BlueArc's Mercury hybrid-core architecture and SiliconFS file system positions BlueArc well for a highly competitive pNFS implementation capable of incremental performance and capacity scaling without arbitrary limitations.

BlueArc pNFS

As with other parallel file system implementations, pNFS provides greatly improved throughput to cluster applications by allowing clients to access storage directly, and in parallel. This parallelism is accomplished by separating content data and file metadata. Instead of a single NFS server that processes both data and metadata, pNFS moves the metadata out of the data path by defining separate Metadata Servers and Data Movers. pNFS specifies a standard parallel file system protocol that defines the communication between:

- Clients and Metadata Servers
- Clients and Data Movers

The communication between Metadata Servers and Data Movers is left as an implementation detail and will differ from vendor to vendor. In this manner, vendors remain free to add value without jeopardizing functionality and interoperability from the perspective of pNFS clients.

BlueArc pNFS Architecture and Approach

Beyond support for the NFS v4.1 and pNFS protocols, BlueArc's architecture combines the parallel file system performance benefits of pNFS with BlueArc's traditional strengths, and extends these benefits to CIFS and NFS v3.0 clients. Metadata, including naming, ownership, permissions, location, and file system layout, is held on Metadata Servers. Given their importance, Metadata Servers support clustering to provide essential high availability. Close-to-linear scalability is achieved by striping the data across a number of Data Movers, which may also be clustered for performance or availability.
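To make the striped data path concrete, the following minimal sketch models, in simplified Python, how a pNFS-style client might use a file layout to read stripe units from several Data Movers in parallel. It illustrates the general striping technique only; it is not BlueArc code or the NFS v4.1 wire protocol, and the Layout type and the per-mover reader callables are assumptions introduced for the example.

from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Layout:
    """Simplified file layout: stripe unit size plus one reader per Data Mover."""
    stripe_unit: int                            # bytes per stripe unit
    movers: List[Callable[[int, int], bytes]]   # mover(offset, length) -> bytes

def read_range(layout: Layout, offset: int, length: int) -> bytes:
    """Read [offset, offset+length) by issuing stripe-unit reads to Data Movers in parallel."""
    def read_unit(unit: int) -> bytes:
        # Stripe units map to Data Movers round-robin, so consecutive units come
        # from different movers and their bandwidth aggregates.
        mover = layout.movers[unit % len(layout.movers)]
        return mover(unit * layout.stripe_unit, layout.stripe_unit)

    first = offset // layout.stripe_unit
    last = (offset + length - 1) // layout.stripe_unit
    # The data path goes straight to the Data Movers; the Metadata Server is
    # consulted only to obtain the layout (not shown here).
    with ThreadPoolExecutor() as pool:
        data = b"".join(pool.map(read_unit, range(first, last + 1)))
    return data[offset - first * layout.stripe_unit:][:length]

In this model, a single large read draws on every Data Mover named in the layout at once, which is why adding movers increases the bandwidth available to one file.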
Metadata Servers and Data Movers require different performance characteristics, with Metadata Servers optimized for high levels of IOPS and Data Movers optimized for delivering bandwidth to and from storage. A high-level perspective of BlueArc's pNFS architecture is shown in Figure 1. pNFS clients communicate with the Metadata Server to find out where desired data is located, and then communicate directly, and in parallel, with the Data Movers. Data transfer takes advantage of the aggregated bandwidth available to all of the involved Data Movers. Bandwidth scales with the performance of each Data Mover and with the number of Data Movers in the configuration (with no architectural upper limit). Assuming that the Metadata Servers are sufficiently powerful, BlueArc's architecture allows performance and capacity to grow incrementally and independently:

- Performance can be scaled by adding more Data Movers to the system.
- Capacity can be scaled by adding additional storage to the system.

pNFS provides support for three storage protocols in the data path: blocks, objects, and files. BlueArc pNFS assumes the files protocol, consistent with a NAS model and with industry trends toward a predominance of unstructured data.

Figure 1: BlueArc's architecture for NFS v4.1 and pNFS. pNFS clients exchange metadata with clustered Metadata Servers and transfer data over direct, parallel paths to clustered Data Movers.

BlueArc Mercury Hybrid-Core Architecture

The BlueArc Mercury hybrid-core architecture provides massive data path parallelization and serves as the hardware foundation for BlueArc's SiliconFS hardware-based file system. BlueArc Mercury currently provides pipelined, hardware-accelerated NFS products that offer capacity, throughput, and industry-leading IOPS performance. BlueArc's approach utilizes flexible and cost-efficient field programmable gate arrays (FPGAs) in conjunction with standard multi-core processors, allowing the most appropriate processing resource to be used for a given task. For instance, high-speed data movement is a highly repeatable task that is best executed in the FPGAs, while functions such as higher-level protocol handling, out-of-band systems management, and error/exception handling are best suited to a flexible multi-core processor. Moving core file system functionality into silicon dramatically improves the performance potential of network storage systems while adhering to established standards for both network and storage system access. FPGAs are even more suitable for accelerating core file system operations, which is the reason for BlueArc's very strong IOPS performance. IOPS scalability is key to providing truly scalable Metadata Servers in a pNFS implementation. BlueArc understands these issues well, and has a strong record as the industry leader in IOPS scalability, as demonstrated by the SPECsfs2008 benchmark results shown in Figure 2. SPEC and SPECsfs are registered trademarks of the Standard Performance Evaluation Corporation (SPEC). Please see the SPEC website for the latest results.

Figure 2. BlueArc Mercury consistently demonstrates IOPS performance leadership through the SPECsfs2008 benchmark (results shown for one- and two-node Mercury configurations with 72 to 288 drives).

SiliconFS for Robust NFS Performance

SiliconFS is BlueArc's hardware-accelerated file system supported by the Mercury hybrid-core architecture. SiliconFS offers specific advantages in terms of transparent data mobility, simplified management of a tiered storage architecture, a global namespace, and dynamic restriping and rebalancing. Specifically, SiliconFS provides:

- A parallel state machine, translating network protocols to block layouts on disk using an object-based file system
- Open protocols and an open storage ecosystem
- Robust enterprise storage management features
- Scalability to petabytes of data, millions of files, and many thousands of hosts
- Optimized metadata management

While SiliconFS is unique to BlueArc, the company maintains an open philosophy when it comes to client operating systems, network access protocols, and back-end storage manufacturer choices. Users today are free to choose the NFS v3.0 or CIFS protocols in BlueArc storage solutions, and will be able to optionally select NFS v4.1 and pNFS in the future.

Designing an Effective NFS Architecture for NFS v4.1 and pNFS

NFS v4.1 provides substantial new technology, including pNFS, that changes the ways that high-capacity storage infrastructure can be designed and deployed. In designing an architecture for NFS v4.1, BlueArc wanted to take advantage of the opportunities provided by technologies such as pNFS, but also wanted to expand on the capabilities provided by traditional NAS products.

General Design Requirements

Beyond merely providing an effective pNFS implementation, BlueArc set out to design a complete next-generation NAS architecture that addresses the needs of a NAS implementation in high-performance environments. To be effective, an NFS v4.1 architecture must provide:

- Optimum resource utilization. Contemporary storage architectures must provide good value for the money spent, both in terms of initial acquisition costs and through the optimal use of both hardware and software. For example, consistently idle resources (CPUs, disk arrays, cluster nodes, file systems) during high NAS load clearly indicate that the storage architecture can be improved.
- Dynamic load balancing. It is extremely important to distribute load between NAS resources dynamically. Load should be distributed equitably based on resource performance characteristics and current usage levels. An architecture that allows for dynamic load balancing can cater to many diverse and varied workloads, datasets, and features while requiring relatively inexpensive but optimally used hardware. Load balancing must also take place at multiple levels, including disk subsystems, storage nodes, and the overall NAS system itself.

- Multi-dimensional scalability. NAS platforms need to scale easily in multiple dimensions, horizontally, vertically, and in multiple combinations, an ability BlueArc calls scale-right storage. Performance should grow as more computational hardware or I/O capacity is added to the configuration. The resulting system must also be able to scale independently in terms of performance and capacity. Workload scalability should be available both for single clients and for the workloads generated by multiple clients. Simultaneous support for highly varied workloads should result in good and consistent performance. System management should be as simple as possible, and management too should scale, to ensure that the need for additional storage (for example) does not translate to a proportional increase in IT administration.
- Low-cost, easy, and incremental expandability. A large initial investment is a barrier to many organizations that need to start small and scale quickly to address growing needs. Deployments need to be able to start with a scalable, low-cost offering and keep adding to the storage platform as requirements change. The architecture must be able to scale incrementally in terms of both performance and capacity.

Responding to a Variety of Client Loads

Beyond these general needs, effective parallel file system implementations must be able to respond effectively to a variety of client loads. Most large storage systems are inevitably shared by multiple users running diverse applications that generate a wide range of storage load types. Providing good performance in the face of variable storage load types is one of the technical challenges in building high-performance parallel file systems. File system loads can assume a number of forms, including:

- Across File System Load is generated by many clients simultaneously operating on different exported file systems.
- File System Load is generated by many clients operating in parallel on different objects of the same exported file system.
- File System Object Load is generated when many clients send requests toward the same file system object (file) within a particular file system.

Of course, real-life client loads can be any combination of the loads described above. In particular, it is important to note that different clients can collide on the same exported file system or file system object. While read caching can provide some relief, most traditional NAS architectures do not scale particularly well in such circumstances, since machine and storage resources can be left under-utilized if all of the clients are working on the same file system.

Overcoming the Limitations of Traditional NAS

Modern NAS solutions are comprised of multi-layered file systems, some visible to the end user and some reserved as private implementation details. A number of constructs are useful in describing NAS architecture:

- An Implemented File System (IFS) represents a way to organize, access, and manipulate files that is typically hidden from end users and clients. BlueArc SiliconFS is an example of an IFS.
- An Exported File System (EFS) provides a file system that can be exported to a user, often through an industry-standard protocol such as NFS.
- An Abstract File System (AFS) represents a collection of exported file systems that have been exported to the user. One or more EFSs are combined under a unified name space to construct an AFS.

A NAS architecture can introduce several AFSs, and an AFS can serve different access protocols such as NFS, CIFS, and so on.
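To illustrate these constructs, the following minimal sketch (an assumption for illustration, not BlueArc code) models an IFS, an EFS, and an AFS in Python, with each export backed by a single implementation in the way traditional NAS couples them:

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ImplementedFS:              # IFS: internal organization, hidden from clients
    name: str

@dataclass
class ExportedFS:                 # EFS: what clients actually mount, e.g. over NFS
    name: str
    backing: ImplementedFS        # traditional NAS: exactly one IFS per EFS

@dataclass
class AbstractFS:                 # AFS: exports united under a common name space
    namespace: Dict[str, ExportedFS] = field(default_factory=dict)

    def export(self, directory: str, efs: ExportedFS) -> None:
        self.namespace[directory] = efs

# Three directories, three exports, three implementations (as in Figure 3 below).
afs = AbstractFS()
for i in (1, 2, 3):
    afs.export(f"Directory {i}", ExportedFS(f"EFS{i}", ImplementedFS(f"IFS{i}")))

The tight coupling between each ExportedFS and its single ImplementedFS is exactly the restriction discussed in the following paragraphs.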
Typical NAS architecture has implied a one-to-one relationship between implemented file systems and exported file systems, as illustrated in Figure 3. A client sees the storage as an AFS that supports a number of protocols. For example, several exported file systems could be united under the same name space using the BlueArc Cluster Name Space (CNS). Each of the implemented file systems is very tightly coupled (in a one-to-one relationship) to an exported file system that supports a protocol-independent API. While this one-to-one relationship has provided simplicity, it can prevent flexibility and scalability in the resulting NAS system, because the implementation of the file system cannot adapt dynamically in response to changing loads or constraints.

Figure 3. Typical NAS architecture: an abstract file system is constructed from a collection of exported file systems (EFS1-EFS3) that have a one-to-one relationship with internal implemented file systems (IFS1-IFS3).

BlueArc NFS v4.1 and pNFS Architecture

BlueArc's approach to providing a complete NFS v4.1 NAS architecture breaks the traditional one-to-one correspondence between the exported file system and its implementation. In BlueArc's architecture, that correspondence is replaced with a many-to-many relationship in which an exported file system corresponds to several implemented file systems, and each implemented file system generally corresponds to several exported file systems. As a result of this design, all of the resources of all of the implemented file systems are available when processing client requests to a particular exported file system, and new resources can be added dynamically.

This many-to-many mapping enables new possibilities in terms of optimal resource utilization, parallel processing, and load distribution. In general, this approach allows better scalability, since another implemented file system can be dynamically added and assigned to one or more existing and/or new exported file systems to provide additional resources. This scalability in turn allows an inexpensive low-end system to grow into high-performance, high-capacity storage with the incremental addition of components. The result is a system that can adapt easily to different requirements, including price, performance, capacity, and load, all while parallelizing the processing of client tasks in the most appropriate fashion. Another key aspect of this architectural approach is that it breaks the coupling between the modules that interpret and keep state for the various protocols and the file systems that store the objects accessed through those protocols. Protocol modules can be either local or remote, allowing scalability by adding additional hardware resources as required. This comprehensive model essentially creates a tiered (or layered) storage software stack, with each layer encapsulating a logically related set of functionality.
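Continuing the earlier sketch, the many-to-many relationship can be modeled as a pair of mappings that are updated as resources are added. This is an illustrative assumption about the mapping, not a description of BlueArc's internal data structures:

from collections import defaultdict

class FileSystemMap:
    """Tracks which implemented file systems back which exports, and vice versa."""
    def __init__(self):
        self.efs_to_ifs = defaultdict(set)    # exported -> implemented
        self.ifs_to_efs = defaultdict(set)    # implemented -> exported

    def assign(self, efs: str, ifs: str) -> None:
        """Dynamically assign an implemented file system to an export."""
        self.efs_to_ifs[efs].add(ifs)
        self.ifs_to_efs[ifs].add(efs)

fs_map = FileSystemMap()
fs_map.assign("EFS1", "IFS_A")
fs_map.assign("EFS1", "IFS_B")    # EFS1 now spans two implemented file systems
fs_map.assign("EFS2", "IFS_B")    # IFS_B also serves a second export
fs_map.assign("EFS2", "IFS_C")    # added later, without disturbing existing exports

Because requests for any export can be spread across every implemented file system it maps to, newly added resources join the working set immediately rather than sitting behind a single fixed export.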

Architecture Overview

Figure 4 provides a high-level perspective of the modular BlueArc NFS v4.1 and pNFS architecture, highlighting the distribution of the anticipated modules that comprise the architecture. Leveraging the strengths of BlueArc's technology, the architecture brings innovation to the pNFS standard while delivering key BlueArc values.

Figure 4. BlueArc's approach to unified storage architecture breaks the traditional one-to-one mapping between exported file systems and their implementation. CIFS, NFS v3.0, and other clients and tasks reach the NAS Server through protocol Converters (each containing an embedded NFS v4.1 client), while NFS v4.1 clients communicate with the NAS Server directly; the NAS Server itself comprises a Metadata Server and Data Servers backed by implemented metadata and data file systems (IMDFS and IDFS #1 through #n).

BlueArc's comprehensive architecture is comprised of several building blocks:

- The NAS Platform encompasses one or many physical storage systems and their hardware and software resources. The Platform layer is the only level of the system that is visible to external clients.
- The NAS Server is the core tier that implements NFS v4.1 functionality, including pNFS. NFS v4.1 is the only protocol that clients use to communicate with the NAS Server. NFS v4.1 clients communicate directly with the NAS Server, and the server may run across many physical systems.
- Converters allow access to the NAS Server from many different protocols and management tasks. NFS v3.0, CIFS, and other clients and tasks interact with Converters that communicate with the NAS Server via the NFS v4.1 protocol, as sketched below. This capability makes the scalability of pNFS accessible to non-parallel protocols such as NFS v3.0 and CIFS.
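The sketch below illustrates the Converter idea in simplified Python: a protocol front end accepts a legacy-style request and services it through an embedded NFS v4.1 client. It is a hypothetical illustration of the pattern rather than BlueArc code; the class names, the read_v41 call on the injected NAS Server object, and the handle-to-path resolution are all assumptions.

class NFSv41Client:
    """Stand-in for the NFS v4.1/pNFS client embedded in a Converter."""
    def __init__(self, nas_server):
        self.nas_server = nas_server

    def read(self, path, offset, length):
        # A real client would obtain a layout from the Metadata Server and read
        # stripes from the Data Movers in parallel (see the earlier sketch).
        return self.nas_server.read_v41(path, offset, length)   # hypothetical call

class NFSv3Converter:
    """Accepts NFS v3-style requests and forwards them over NFS v4.1."""
    def __init__(self, v41_client):
        self.v41 = v41_client

    def nfs3_read(self, filehandle, offset, count):
        path = self.resolve(filehandle)          # hypothetical handle-to-path mapping
        return self.v41.read(path, offset, count)

    def resolve(self, filehandle):
        return filehandle                        # placeholder for illustration

Because every Converter ultimately speaks NFS v4.1 to the NAS Server, legacy clients inherit the same parallel back end without any client-side changes.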

As a pNFS implementation, the NAS Server consists of two kinds of implemented file systems: Metadata File Systems (MDFS) and Data File Systems (DFS). Each exported file system is comprised of at least one MDFS and one or more DFSs. The MDFS maintains the directory hierarchy, client file attributes, and the metadata needed to locate client file data. Each DFS contains files that correspond to a range of the client file data. A client file system is thus distributed between the MDFS and potentially many DFSs.

Breaking the One-to-One Restriction

Figure 5 represents a simple logical example of how BlueArc's architecture breaks the one-to-one correspondence between the implemented and exported file systems.

Figure 5. A simple representation of three exported file systems. Each of the three exported file systems is implemented with one Metadata File System and one or more Data File Systems. For example, Directory 1 is represented by an exported file system (EFS1) that consists of one Metadata File System (IMDFS1) and two Data File Systems (IDFS1_1 and IDFS1_2). Additional performance scalability can be provided by adding Data File Systems, while additional storage capacity can be added independently.

Achieving True Multidimensional Scalability

BlueArc's pNFS architecture accommodates considerable flexibility. The example shown in Figure 6 extends the many-to-many relationship inherent in BlueArc's architecture. As shown, components of all of the exported file systems can be distributed across multiple Metadata and Data Servers. In particular, a directory or file could be stored in several Metadata File Systems or Data File Systems, respectively, yielding considerable flexibility. It is this capability that allows BlueArc's unified NAS approach to start small, scale as needed, and rapidly respond to emerging needs.

Figure 6. In BlueArc's architecture, a single exported file system may be spread across many implemented file systems, and an implemented file system may serve many exported file systems, allowing considerable flexibility and multi-dimensional scalability. In this example, two Metadata Servers are shown along with four Data Movers, all deployed on BlueArc Mercury technology.

Portions of the various exported file systems are distributed across multiple implemented file systems, depending on the needs of the individual file systems for capacity, performance, or redundancy. For example, Directory 1 is mapped into EFS1, which is internally spread across five physical systems (two Metadata Servers and three Data Movers). Metadata Servers and Data Movers alike can be based on BlueArc Mercury technology; a low-cost Linux-based Data Mover will also be available. The architecture inherits a number of distinct technological advantages from BlueArc Mercury technology, including:

- Performance and hardware-accelerated scalability, including excellent metadata scalability, FPGA and multi-core acceleration, and proven IOPS performance
- Clustering for both availability and performance
- Virtualization to allow for the allocation and re-allocation of modules and resources across the platform, providing both scalable performance and ideal utilization
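To make the composition shown in Figures 5 and 6 concrete, the following minimal sketch (illustrative only, with hypothetical names; the actual assignment logic is a BlueArc implementation detail) records which Metadata and Data File Systems back each export and shows an export being grown in place:

from dataclasses import dataclass, field
from typing import List

@dataclass
class PnfsExport:
    """One exported file system and the implemented file systems behind it."""
    name: str
    mdfs: List[str] = field(default_factory=list)   # Metadata File Systems: names, attributes, layout
    dfs: List[str] = field(default_factory=list)    # Data File Systems: ranges of client file data

exports = [
    PnfsExport("EFS1", mdfs=["IMDFS1"], dfs=["IDFS1_1", "IDFS1_2"]),   # as in Figure 5
    PnfsExport("EFS2", mdfs=["IMDFS2"], dfs=["IDFS2_1"]),
    PnfsExport("EFS3", mdfs=["IMDFS3"], dfs=["IDFS3_1"]),
]

# Growing an export is additive: more Data File Systems for bandwidth, more
# storage behind existing ones for capacity, with no change visible to clients.
exports[0].dfs.append("IDFS1_3")   # hypothetical new Data File System for EFS1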

BlueArc Professional Services: A Complete pNFS Solution

Beyond its technical advantages, BlueArc pNFS is a comprehensive solution, combining software, hardware, and professional services. Whether organizations are moving from a current BlueArc environment or from another vendor's solution, BlueArc Professional Services offers a suite of offerings to help streamline the adoption of BlueArc pNFS.

Assessment, design, and architecture. BlueArc professional engineers are available to assess any IT environment and design a solution to maximize productivity. Assessment investigates the current workflow, capacity, performance, and application needs within the target environment. Design and architecture combine these needs with future performance goals to help provide choices around implementation trade-offs, ultimately leading to a balanced solution prioritized against processing goals. This phase of the program goes well beyond sizing processing requirements, instead focusing on the design aspects of ongoing growth, supportability, reliability, performance analysis, and troubleshooting so that all aspects of sustaining the system are addressed.

Installation, implementation, and migration. BlueArc Professional Services can provide the onsite services required to support both installation and implementation. Installation services address preproduction systems staging to ensure that the delivered platform is verified as operationally ready for deployment. Implementation services build on the initial installation and begin the process of configuring the system according to the design and architecture needs of the customer environment. This phase initiates much of the software configuration process specific to the feature content and design goals of each customer, through example and demonstration. Implementation services can be used to verify the successful achievement of design goals and to address any unplanned onsite integration requirements.

Migration services. The purchase of a new storage system often leads to the need for migration services. BlueArc Professional Services can provide planning, data migration, and validation testing support to minimize impact to critical applications. Flexible data migration tools and services enable BlueArc to accommodate a variety of requirements and optimize service delivery. As part of the service, BlueArc will assist with consolidating and optimizing new data layout requirements, ensuring that post-transfer accessibility is verified. When needed, BlueArc can also help to ensure that any changes or adjustments to users, groups, security, or network services are working as expected.

Education. BlueArc provides administrative training courses designed to enable operational excellence and best practices for systems administrators within customer accounts. These classes provide hands-on experience, lab exercises, and testing to confirm student progress and understanding. BlueArc also offers custom onsite training that can be tailored to individual business and technology environments. BlueArc's education specialists work with each organization to evaluate their application requirements, staffing, facilities, and equipment, tailoring content to specific requirements. Training is optimized to emphasize the content each team needs to be effective. If required, advanced classes are offered to allow customers to become more self-sufficient in terms of performance optimization, supportability, and maintenance.
Performance optimization. BlueArc understands how to obtain maximum performance from BlueArc technologies, and can help organizations get the most from their investments in BlueArc pNFS technology. Performance optimization starts with assessing the existing system and collecting data that allows BlueArc experts to analyze it. This process results in recommendations for corrective action, better information about system limits and interactions, and education about the performance principles affecting the system. As required, a thorough explanation is provided to ensure the customer gains a better understanding of how the system behaves in relation to their workload performance and growth over time.

Custom assistance. Custom engagements can be designed to address unique requirements for any organization. A custom statement of work (SOW) can be drafted to combine any of the above services and to define services that are unique to a particular customer environment. Custom assistance is often provided for complementary services regarding backup, replication, disaster recovery, availability, and so on.

Conclusion

The availability of the pNFS standard as a part of NFS v4.1 represents a key opportunity for both traditional HPC applications and the many emerging commercial HPC environments that will look toward parallel file systems in the years to come. The ubiquity of an open industry standard combined with the throughput benefits of a parallel file system represents a potent combination. As a high-performance storage vendor that has always been committed to providing performance through open standards, BlueArc will accelerate pNFS as a part of its comprehensive storage strategy. In short, BlueArc is committed to pNFS, and it will do for pNFS what it has always done for NFS: provide an efficient, high-performance, and differentiated implementation.

Organizations can start small with cost-efficient Linux Data Movers, upgrading to more powerful Data Movers based on BlueArc Mercury technology as required. Resources can be dynamically assigned and reassigned to meet performance or capacity needs, while ensuring maximum utilization of resources. BlueArc Mercury's massive hardware-assisted IOPS performance provides considerable headroom for building very large storage clusters, and the ability to cluster BlueArc Mercury nodes provides failover protection and performance scalability. Capacity can be increased by adding a wide range of leading disk storage systems. Perhaps best of all, the BlueArc Mercury architecture for NFS v4.1 and pNFS retains the simple management profile of NAS, so that administration stays manageable even as the system scales to multi-petabyte deployments.

About BlueArc

BlueArc is a leading provider of high-performance unified network storage systems to enterprise markets, as well as data-intensive markets such as electronic discovery, entertainment, federal government, higher education, Internet services, oil and gas, and life sciences. Our products support both network attached storage (NAS) and storage area network (SAN) services on a converged network storage platform. We enable companies to expand the ways they explore, discover, research, create, process, and innovate in data-intensive environments. Our products replace complex and performance-limited products with high-performance, scalable, and easy-to-use systems capable of handling the most data-intensive applications and environments. Further, we believe that our energy-efficient design and our products' ability to consolidate legacy storage infrastructures dramatically increase storage utilization rates and reduce our customers' total cost of ownership.

BlueArc Corporation, Corporate Headquarters, 50 Rio Robles Drive, San Jose, CA
BlueArc UK Ltd., European Headquarters, Queensgate House, Cookham Road, Bracknell RG12 1RB, United Kingdom

BlueArc Corporation. All rights reserved. The BlueArc logo is a registered trademark of BlueArc Corporation. 11/10 WP-MADA-00


STORAGE CENTER. The Industry s Only SAN with Automated Tiered Storage STORAGE CENTER STORAGE CENTER DATASHEET STORAGE CENTER Go Beyond the Boundaries of Traditional Storage Systems Today s storage vendors promise to reduce the amount of time and money companies spend on storage but instead

More information

Storage Systems Performance Testing

Storage Systems Performance Testing Storage Systems Performance Testing Client Overview Our client is one of the world s leading providers of mid-range and high-end storage systems, servers, software and services. Our client applications

More information

BROCADE PERFORMANCE MANAGEMENT SOLUTIONS

BROCADE PERFORMANCE MANAGEMENT SOLUTIONS Data Sheet BROCADE PERFORMANCE MANAGEMENT SOLUTIONS SOLUTIONS Managing and Optimizing the Performance of Mainframe Storage Environments HIGHLIGHTs Manage and optimize mainframe storage performance, while

More information

Enterprise Storage Solution for Hyper-V Private Cloud and VDI Deployments using Sanbolic s Melio Cloud Software Suite April 2011

Enterprise Storage Solution for Hyper-V Private Cloud and VDI Deployments using Sanbolic s Melio Cloud Software Suite April 2011 Enterprise Storage Solution for Hyper-V Private Cloud and VDI Deployments using Sanbolic s Melio Cloud Software Suite April 2011 Executive Summary Large enterprise Hyper-V deployments with a large number

More information

White Paper. Low Cost High Availability Clustering for the Enterprise. Jointly published by Winchester Systems Inc. and Red Hat Inc.

White Paper. Low Cost High Availability Clustering for the Enterprise. Jointly published by Winchester Systems Inc. and Red Hat Inc. White Paper Low Cost High Availability Clustering for the Enterprise Jointly published by Winchester Systems Inc. and Red Hat Inc. Linux Clustering Moves Into the Enterprise Mention clustering and Linux

More information

How To Backup With Ec Avamar

How To Backup With Ec Avamar BACKUP AND RECOVERY FOR MICROSOFT-BASED PRIVATE CLOUDS LEVERAGING THE EMC DATA PROTECTION SUITE A Detailed Review ABSTRACT This white paper highlights how IT environments which are increasingly implementing

More information

ENTERPRISE STORAGE WITH THE FUTURE BUILT IN

ENTERPRISE STORAGE WITH THE FUTURE BUILT IN ENTERPRISE STORAGE WITH THE FUTURE BUILT IN Breakthrough Efficiency Intelligent Storage Automation Single Platform Scalability Real-time Responsiveness Continuous Protection Storage Controllers Storage

More information

Scalable Windows Storage Server File Serving Clusters Using Melio File System and DFS

Scalable Windows Storage Server File Serving Clusters Using Melio File System and DFS Scalable Windows Storage Server File Serving Clusters Using Melio File System and DFS Step-by-step Configuration Guide Table of Contents Scalable File Serving Clusters Using Windows Storage Server Using

More information

Storage Switzerland White Paper Storage Infrastructures for Big Data Workflows

Storage Switzerland White Paper Storage Infrastructures for Big Data Workflows Storage Switzerland White Paper Storage Infrastructures for Big Data Workflows Sponsored by: Prepared by: Eric Slack, Sr. Analyst May 2012 Storage Infrastructures for Big Data Workflows Introduction Big

More information

IBM PureFlex System. The infrastructure system with integrated expertise

IBM PureFlex System. The infrastructure system with integrated expertise IBM PureFlex System The infrastructure system with integrated expertise 2 IBM PureFlex System IT is moving to the strategic center of business Over the last 100 years information technology has moved from

More information

Hitachi NAS Platform and Hitachi Content Platform with ESRI Image

Hitachi NAS Platform and Hitachi Content Platform with ESRI Image W H I T E P A P E R Hitachi NAS Platform and Hitachi Content Platform with ESRI Image Aciduisismodo Extension to ArcGIS Dolore Server Eolore for Dionseq Geographic Uatummy Information Odolorem Systems

More information

Best Practices for Architecting Storage in Virtualized Environments

Best Practices for Architecting Storage in Virtualized Environments Best Practices for Architecting Storage in Virtualized Environments Leverage Advances in Storage Technology to Accelerate Performance, Simplify Management, and Save Money in Your Virtual Server Environment

More information

" " " " " " " " " " " " " " " " " " " " " " " " " " " " " " " " "

                                ! WHITE PAPER! The Evolution of High-Performance Computing Storage Architectures in Commercial Environments! Prepared by: Eric Slack, Senior Analyst! May 2014 The Evolution of HPC Storage Architectures

More information

Nexenta Performance Scaling for Speed and Cost

Nexenta Performance Scaling for Speed and Cost Nexenta Performance Scaling for Speed and Cost Key Features Optimize Performance Optimize Performance NexentaStor improves performance for all workloads by adopting commodity components and leveraging

More information

Evolving Datacenter Architectures

Evolving Datacenter Architectures Technology Insight Paper Evolving Datacenter Architectures HP technologies for Cloud ready IT By Russ Fellows January, 2013 Enabling you to make the best technology decisions Evolving Datacenter Architectures

More information

High Availability with Windows Server 2012 Release Candidate

High Availability with Windows Server 2012 Release Candidate High Availability with Windows Server 2012 Release Candidate Windows Server 2012 Release Candidate (RC) delivers innovative new capabilities that enable you to build dynamic storage and availability solutions

More information

Maxta Storage Platform Enterprise Storage Re-defined

Maxta Storage Platform Enterprise Storage Re-defined Maxta Storage Platform Enterprise Storage Re-defined WHITE PAPER Software-Defined Data Center The Software-Defined Data Center (SDDC) is a unified data center platform that delivers converged computing,

More information

Block based, file-based, combination. Component based, solution based

Block based, file-based, combination. Component based, solution based The Wide Spread Role of 10-Gigabit Ethernet in Storage This paper provides an overview of SAN and NAS storage solutions, highlights the ubiquitous role of 10 Gigabit Ethernet in these solutions, and illustrates

More information

Scale-out NAS Unifies the Technical Enterprise

Scale-out NAS Unifies the Technical Enterprise Scale-out NAS Unifies the Technical Enterprise Panasas Inc. White Paper July 2010 Executive Summary Tremendous effort has been made by IT organizations, and their providers, to make enterprise storage

More information

Achieving Real-Time Business Solutions Using Graph Database Technology and High Performance Networks

Achieving Real-Time Business Solutions Using Graph Database Technology and High Performance Networks WHITE PAPER July 2014 Achieving Real-Time Business Solutions Using Graph Database Technology and High Performance Networks Contents Executive Summary...2 Background...3 InfiniteGraph...3 High Performance

More information

Top Ten Questions. to Ask Your Primary Storage Provider About Their Data Efficiency. May 2014. Copyright 2014 Permabit Technology Corporation

Top Ten Questions. to Ask Your Primary Storage Provider About Their Data Efficiency. May 2014. Copyright 2014 Permabit Technology Corporation Top Ten Questions to Ask Your Primary Storage Provider About Their Data Efficiency May 2014 Copyright 2014 Permabit Technology Corporation Introduction The value of data efficiency technologies, namely

More information

Symantec NetBackup Appliances

Symantec NetBackup Appliances Symantec NetBackup Appliances Simplifying Backup Operations Geoff Greenlaw Manager, Data Centre Appliances UK & Ireland January 2012 1 Simplifying Your Backups Reduce Costs Minimise Complexity Deliver

More information

Optimizing and Managing File Storage

Optimizing and Managing File Storage W H I T E P A P E R Optimizing and Managing File Storage in Windows Environments A Powerful Solution Based on Microsoft DFS and Brocade Tapestry StorageX. The Microsoft Distributed File System (DFS) is

More information

Cloud Storage. Parallels. Performance Benchmark Results. White Paper. www.parallels.com

Cloud Storage. Parallels. Performance Benchmark Results. White Paper. www.parallels.com Parallels Cloud Storage White Paper Performance Benchmark Results www.parallels.com Table of Contents Executive Summary... 3 Architecture Overview... 3 Key Features... 4 No Special Hardware Requirements...

More information

LS-DYNA Best-Practices: Networking, MPI and Parallel File System Effect on LS-DYNA Performance

LS-DYNA Best-Practices: Networking, MPI and Parallel File System Effect on LS-DYNA Performance 11 th International LS-DYNA Users Conference Session # LS-DYNA Best-Practices: Networking, MPI and Parallel File System Effect on LS-DYNA Performance Gilad Shainer 1, Tong Liu 2, Jeff Layton 3, Onur Celebioglu

More information

Reducing Storage TCO With Private Cloud Storage

Reducing Storage TCO With Private Cloud Storage Prepared by: Colm Keegan, Senior Analyst Prepared: October 2014 With the burgeoning growth of data, many legacy storage systems simply struggle to keep the total cost of ownership (TCO) in check. This

More information

REDEFINE SIMPLICITY TOP REASONS: EMC VSPEX BLUE FOR VIRTUALIZED ENVIRONMENTS

REDEFINE SIMPLICITY TOP REASONS: EMC VSPEX BLUE FOR VIRTUALIZED ENVIRONMENTS REDEFINE SIMPLICITY AGILE. SCALABLE. TRUSTED. TOP REASONS: EMC VSPEX BLUE FOR VIRTUALIZED ENVIRONMENTS Redefine Simplicity: Agile, Scalable and Trusted. Mid-market and Enterprise customers as well as Managed

More information

WHITE PAPER. www.fusionstorm.com. The Double-Edged Sword of Virtualization:

WHITE PAPER. www.fusionstorm.com. The Double-Edged Sword of Virtualization: WHiTE PaPEr: Easing the Way to the cloud: 1 WHITE PAPER The Double-Edged Sword of Virtualization: Solutions and Strategies for minimizing the challenges and reaping the rewards of Disaster recovery in

More information

Simplified Management With Hitachi Command Suite. By Hitachi Data Systems

Simplified Management With Hitachi Command Suite. By Hitachi Data Systems Simplified Management With Hitachi Command Suite By Hitachi Data Systems April 2015 Contents Executive Summary... 2 Introduction... 3 Hitachi Command Suite v8: Key Highlights... 4 Global Storage Virtualization

More information

an introduction to networked storage

an introduction to networked storage an introduction to networked storage How networked storage can simplify your data management The key differences between SAN, DAS, and NAS The business benefits of networked storage Introduction Historical

More information

Cisco Virtualized Multiservice Data Center Reference Architecture: Building the Unified Data Center

Cisco Virtualized Multiservice Data Center Reference Architecture: Building the Unified Data Center Solution Overview Cisco Virtualized Multiservice Data Center Reference Architecture: Building the Unified Data Center What You Will Learn The data center infrastructure is critical to the evolution of

More information

COMPARING STORAGE AREA NETWORKS AND NETWORK ATTACHED STORAGE

COMPARING STORAGE AREA NETWORKS AND NETWORK ATTACHED STORAGE COMPARING STORAGE AREA NETWORKS AND NETWORK ATTACHED STORAGE Complementary technologies provide unique advantages over traditional storage architectures Often seen as competing technologies, Storage Area

More information

Microsoft Private Cloud Fast Track

Microsoft Private Cloud Fast Track Microsoft Private Cloud Fast Track Microsoft Private Cloud Fast Track is a reference architecture designed to help build private clouds by combining Microsoft software with Nutanix technology to decrease

More information

Introduction to Red Hat Storage. January, 2012

Introduction to Red Hat Storage. January, 2012 Introduction to Red Hat Storage January, 2012 1 Today s Speakers 2 Heather Wellington Tom Trainer Storage Program Marketing Manager Storage Product Marketing Manager Red Hat Acquisition of Gluster What

More information

WHITE PAPER. www.fusionstorm.com. Easing the Way to the Cloud:

WHITE PAPER. www.fusionstorm.com. Easing the Way to the Cloud: WHITE PAPER: Easing the Way to the Cloud: 1 WHITE PAPER Easing the Way to the Cloud: The Value of Using a Reference Architecture in Private Cloud Deployments for Microsoft Applications and Server Platforms

More information

Elastic Private Clouds

Elastic Private Clouds White Paper Elastic Private Clouds Agile, Efficient and Under Your Control 1 Introduction Most businesses want to spend less time and money building and managing IT infrastructure to focus resources on

More information

The Next Evolution in Storage Virtualization Management

The Next Evolution in Storage Virtualization Management The Next Evolution in Storage Virtualization Management Global Storage Virtualization Simplifies Management, Lowers Operational Costs By Hitachi Data Systems July 2014 Contents Executive Summary... 3 Introduction...

More information

TRANSFORM YOUR BUSINESS: BIG DATA AND ANALYTICS WITH VCE AND EMC

TRANSFORM YOUR BUSINESS: BIG DATA AND ANALYTICS WITH VCE AND EMC TRANSFORM YOUR BUSINESS: BIG DATA AND ANALYTICS WITH VCE AND EMC Vision Big data and analytic initiatives within enterprises have been rapidly maturing from experimental efforts to production-ready deployments.

More information

Business-centric Storage FUJITSU Hyperscale Storage System ETERNUS CD10000

Business-centric Storage FUJITSU Hyperscale Storage System ETERNUS CD10000 Business-centric Storage FUJITSU Hyperscale Storage System ETERNUS CD10000 Clear the way for new business opportunities. Unlock the power of data. Overcoming storage limitations Unpredictable data growth

More information

From Ethernet Ubiquity to Ethernet Convergence: The Emergence of the Converged Network Interface Controller

From Ethernet Ubiquity to Ethernet Convergence: The Emergence of the Converged Network Interface Controller White Paper From Ethernet Ubiquity to Ethernet Convergence: The Emergence of the Converged Network Interface Controller The focus of this paper is on the emergence of the converged network interface controller

More information