Lustre: A Scalable, High-Performance File System
Cluster File Systems, Inc.
Abstract: Today's network-oriented computing environments require high-performance, network-aware file systems that can satisfy both the data storage requirements of individual systems and the data sharing requirements of workgroups and clusters of cooperating systems. The Lustre File System, an open source, high-performance file system from Cluster File Systems, Inc., is a distributed file system that eliminates the performance, availability, and scalability problems present in many traditional distributed file systems. Lustre is a highly modular, next-generation storage architecture that combines established open standards, the Linux operating system, and innovative protocols into a reliable, network-neutral data storage and retrieval solution. Lustre provides high I/O throughput in clusters and shared-data environments, independence from the location of data on physical storage, protection from single points of failure, and fast recovery from cluster reconfiguration and from server or network outages.

1. Overview

Network-centric computing environments demand reliable, high-performance storage systems that properly authenticated clients can depend on for data storage and delivery. Simple cooperative computing environments such as enterprise networks typically satisfy these requirements with distributed file systems based on a standard client/server model. Distributed file systems such as NFS and AFS have been successful in a variety of enterprise scenarios but do not satisfy the requirements of today's high-performance computing environments.

The Lustre distributed file system provides significant performance and scalability advantages over existing distributed file systems. Lustre leverages the power and flexibility of the open source Linux operating system to provide a truly modern, POSIX-compliant file system that satisfies the requirements of large clusters today, while providing a clear design and extension path for even larger environments tomorrow. The name "Lustre" is an amalgam of the terms "Linux" and "clusters".

Distributed file systems have well-known advantages. They decouple computational and storage resources, enabling desktop systems to focus on user and application requests while file servers focus on reading, delivering, and writing data. Centralizing storage on file servers facilitates centralized system administration, simplifying operational tasks such as backups, storage expansion, and general storage reconfiguration without requiring desktop downtime or other interruptions in service. Beyond the standard features required by definition in a distributed file system, a more advanced distributed file system such as AFS simplifies data access and usability by providing a consistent view of the file system from all client systems. It can also support redundancy: failover services, in conjunction with redundant storage devices, maintain multiple synchronized copies of critical resources, so that if any critical resource fails, the file system automatically substitutes a replica of the failed entity and continues to provide uninterrupted service. This eliminates single points of failure in the distributed file system environment.

Lustre provides significant advantages over the aging distributed file systems that preceded it.
These advantages will be discussed in more detail throughout this paper, but are highlighted here for convenience. Most importantly, Lustre runs on commodity hardware and uses object-based disks for storage and metadata servers for storing file system metadata. This design provides a substantially more efficient division of labor between computing and storage resources. Replicated, failover Metadata Servers (MDSs) maintain a transactional record of high-level file and file system changes. Distributed Object Storage Targets (OSTs) are responsible for actual file system I/O and for interfacing with storage devices, as explained in more detail in the next section. This division of labor and responsibility leads to a truly scalable file system and more reliable recovery from failure conditions, combining the advantages of journaling and distributed file systems. Lustre supports strong file and metadata locking semantics to maintain total coherency of the file system even in the presence of concurrent access. File locking is distributed across the Object Storage Targets (OSTs) that constitute the file system, with each OST handling locks for the objects that it stores.
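To make this division of labor concrete, the following minimal sketch (illustrative Python, not Lustre code; every class and method name here is invented) shows a client creating a file through a metadata server and then locking and writing its objects directly on the storage targets that own them, with each OST acting as the lock manager for its own objects.

```python
# Illustrative sketch only -- not Lustre's actual API. All names are invented.

class ObjectStorageTarget:
    """Stores objects and manages locks for the objects it holds."""
    def __init__(self, name):
        self.name = name
        self.objects = {}   # object id -> bytearray of data
        self.locks = {}     # object id -> current lock holder

    def create_object(self, obj_id):
        self.objects[obj_id] = bytearray()

    def lock(self, obj_id, holder):
        # Each OST is the lock manager for its own objects.
        assert self.locks.get(obj_id) in (None, holder)
        self.locks[obj_id] = holder

    def write(self, obj_id, holder, data):
        assert self.locks.get(obj_id) == holder, "must hold the object lock"
        self.objects[obj_id].extend(data)


class MetadataServer:
    """Keeps the namespace and the file-to-object mapping; never sees file data."""
    def __init__(self):
        self.namespace = {}  # path -> list of (ost, object id)

    def create(self, path, osts):
        layout = []
        for i, ost in enumerate(osts):
            obj_id = f"{path}:{i}"
            ost.create_object(obj_id)
            layout.append((ost, obj_id))
        self.namespace[path] = layout
        return layout


# A client creates a file once through the MDS, then talks only to the OSTs.
osts = [ObjectStorageTarget("ost1"), ObjectStorageTarget("ost2")]
mds = MetadataServer()
layout = mds.create("/data/results.bin", osts)
for ost, obj_id in layout:
    ost.lock(obj_id, holder="client-7")
    ost.write(obj_id, "client-7", b"payload")
```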
Figure 1: Lustre Big Picture. Lustre clients (up to 10,000; 1,000 for Lustre Lite) reach an active/failover MDS pair and the Lustre Object Storage Targets (OSTs) over QSW and GigE networks; the OSTs are Linux OST servers with disk arrays or 3rd-party OST appliances.

Lustre uses an open networking API, the Portals API, made available by Sandia. At the top of the stack is a sophisticated request processing layer provided by Lustre, resting on top of the Portals protocol stack. At the bottom is a network abstraction layer (NAL) that provides out-of-the-box support for multiple types of networks. Like Lustre's use of Linux, Lustre's use of open, flexible standards makes it easy to integrate new and emerging network and storage technologies.

Lustre provides security in the form of authentication, authorization, and privacy by leveraging existing security systems. This makes it easy to incorporate Lustre into existing enterprise security environments without requiring changes in Lustre itself. Similarly, Lustre leverages the underlying journaling file systems provided by Linux to enable persistent state recovery, providing resiliency and recoverability from failed OSTs. Finally, Lustre's configuration and state information is recorded and managed using open standards such as XML and LDAP, making it easy to integrate Lustre management and administration into existing environments and sets of third-party tools.

The remainder of this white paper provides a more detailed analysis of various aspects of the design and implementation of Lustre, along with a roadmap for planned Lustre enhancements. The Lustre web site provides additional documentation on Lustre along with the source code. For more information about Lustre, contact Cluster File Systems, Inc. via email at [email protected].

2. Lustre Functionality

The Lustre file system provides several abstractions designed to improve both performance and scalability. At the file system level, Lustre treats files as objects that are located through Metadata Servers (MDSs). Metadata Servers support all file system namespace operations, such as file lookups, file creation, and file and directory attribute manipulation, and direct actual file I/O requests to Object Storage Targets (OSTs), which manage the storage that is physically located on underlying Object-Based Disks (OBDs). Metadata servers keep a transactional record of file system metadata changes and cluster status, and support failover so that hardware and network outages affecting one Metadata Server do not affect the operation of the file system itself.
Figure 2: Interactions Between Lustre Subsystems. Clients exchange directory operations, metadata, and concurrency traffic with the Metadata Server (MDS); file I/O and file locking traffic with the Object Storage Targets (OSTs); and configuration information, network connection details, and security management traffic with the configuration services. The MDS and OSTs exchange recovery, file status, and file creation traffic.

Like other file systems, the Lustre file system has a unique inode for every regular file, directory, symbolic link, and special file. Regular file inodes hold references to the objects on OSTs that store the file data, rather than references to the file data itself. In conventional file systems, creating a new file causes the file system to allocate an inode and set some of its basic attributes. In Lustre, creating a new file causes the client to contact a metadata server, which creates an inode for the file and then contacts the OSTs to create the objects that will actually hold the file data. Metadata for the objects is held in the inode as extended attributes of the file. The objects allocated on OSTs hold the data associated with the file and can be striped across several OSTs in a RAID pattern. Within the OST, data is actually read from and written to underlying storage known as Object-Based Disks (OBDs). Subsequent I/O to the newly created file is done directly between the client and the OSTs, which interact with the underlying OBDs to read and write data. The metadata server is only updated when additional namespace changes associated with the file are required.

Object Storage Targets handle all of the interaction between client data requests and the underlying physical storage. This storage is generally referred to as Object-Based Disks (OBDs), but is not actually limited to disks, because the interaction between the OST and the actual storage device is done through a device driver. The device driver masks the specific identity of the underlying storage being used. This enables Lustre to leverage existing Linux file systems and storage devices for its underlying storage while providing the flexibility required to integrate new technologies such as smart disks and new types of file systems. For example, Lustre currently provides OBD device drivers that store Lustre data within journaling Linux file systems such as ext3, JFS, ReiserFS, and XFS. This further increases the reliability and recoverability of Lustre by leveraging the journaling mechanisms already provided by those file systems. Lustre can also be used with specialized 3rd-party object storage targets such as those provided by BlueArc.

Lustre's division of storage and allocation into OSTs and underlying OBDs facilitates hardware development that can provide additional performance improvements in the form of a new generation of smart disk drives that provide object-oriented allocation and data management facilities in hardware. Cluster File Systems is actively working with several storage manufacturers to develop integrated OBD support in disk drive hardware. This is analogous to the way in which the SCSI standard pioneered smart devices that offloaded much of the direct hardware interaction into the device's interface and drive controller hardware. Such smart, OBD-aware hardware can provide immediate performance improvements to existing Lustre installations and continues the modern computing trend of offloading device-specific processing to the device itself.
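As a worked example of striping, the sketch below (illustrative only; the function and parameter names are invented, and real Lustre layouts carry more information than this) maps a logical file offset to the object that holds it under a simple round-robin, RAID-0 style striping policy.

```python
def stripe_location(offset, stripe_size, stripe_count):
    """Map a logical file offset to (object index, offset within that object)
    under round-robin (RAID-0 style) striping across stripe_count objects."""
    stripe_number = offset // stripe_size        # which stripe unit of the file
    obj_index = stripe_number % stripe_count     # which object holds that unit
    stripe_row = stripe_number // stripe_count   # units already placed on that object
    obj_offset = stripe_row * stripe_size + offset % stripe_size
    return obj_index, obj_offset

# With 1 MiB stripes over 4 objects, byte 5,500,000 of the file is the sixth
# stripe unit, so it lands in object 1 at offset 1,305,696 within that object.
assert stripe_location(5_500_000, 1_048_576, 4) == (1, 1_305_696)
```

A write that spans a stripe boundary is simply split into per-object pieces computed this way, and those pieces can be issued to their OSTs in parallel.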
Beyond the storage abstraction that they provide, OSTs also provide a flexible model for adding new storage to an existing Lustre file system. New OSTs can easily be brought online and added to the pool of OSTs that a cluster's metadata servers can use for storage. Similarly, new OBDs can easily be added to the pool of underlying storage associated with any OST.

Lustre provides a powerful and unique recovery mechanism used when any communication or storage failure occurs. If a server or network interconnect fails, the client incurs a timeout trying to access data.
It can then query an LDAP server to obtain information about a replacement server and immediately direct subsequent requests to that server. An "epoch" number on every storage controller, an incarnation number on the metadata server/cluster, and a generation number associated with connections between clients and other systems form the infrastructure for Lustre recovery, enabling clients and servers to detect restarts and select appropriate, equivalent servers. When failover OSTs are not available, Lustre automatically adapts: if an OST fails, it generates errors only when data cannot be accessed (apart from raising administrative alarms), and new file creation operations automatically avoid the malfunctioning OST.

Figure 3: Lustre Software Modules. The Lustre file system (FS) module uses the meta-data API to reach the MDS and the data object API to reach the Logical Object Volume (LOV) driver, which in turn drives the Object Storage Client (OSC) modules through the data object API; all of these rest on the networking layer.

Figure 4 a, b: Lustre Automatically Avoids a Malfunctioning OST. a) Normal functioning: an object create for inode N allocates objects on OST 1 and OST 2, each backed by a direct OBD. b) OST 1 failure: OST 1 is skipped during the failure, and the object create for inode M allocates its object on OST 2 only.
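The shape of this client-side recovery logic can be sketched as follows (illustrative Python; the directory lookup and the epoch and generation bookkeeping are simplified stand-ins for the mechanisms described above, not Lustre's implementation).

```python
# Illustrative recovery sketch -- not Lustre's implementation.

class Server:
    def __init__(self, address, epoch, alive=True):
        self.address, self.epoch, self.alive = address, epoch, alive

class ConfigDirectory:
    """Stand-in for the LDAP directory that holds failover information."""
    def __init__(self, failover_map):
        self.failover_map = failover_map          # failed address -> replacement Server
    def lookup_failover(self, address):
        return self.failover_map[address]

def rpc(server, request, timeout_s):
    if not server.alive:
        raise TimeoutError(f"no response from {server.address} after {timeout_s}s")
    return f"{server.address} handled: {request}"

def send_with_recovery(request, server, directory, generation=0):
    """On a timeout, look up a replacement server, confirm it is a newer
    incarnation (higher epoch), bump the connection generation, and retry."""
    try:
        return rpc(server, request, timeout_s=5.0), generation
    except TimeoutError:
        replacement = directory.lookup_failover(server.address)
        assert replacement.epoch > server.epoch, "replacement must be a fresh incarnation"
        # A new connection generation lets both sides detect stale replies.
        return rpc(replacement, request, timeout_s=5.0), generation + 1

mds1 = Server("mds1", epoch=3, alive=False)       # the active MDS has failed
mds2 = Server("mds2", epoch=4)                    # its failover partner
directory = ConfigDirectory({"mds1": mds2})
print(send_with_recovery("mkdir /data/run42", mds1, directory))
```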
Figure 5 a-d: Lustre Failover Mechanism. FPath denotes the request path used in case of MDS/OST failover. a) Normal functioning: client requests follow FPath to MDS 1 (active), with MDS 2 as failover. b) MDS 1 fails to respond and the request times out. c) The client asks LDAP for a new MDS. d) FPath now connects to the newly active MDS 2.

3. File System Metadata and Metadata Servers

File system metadata is "information about information": metadata describes the files and directories that make up a file system. This can simply be information about local files, directories, and their status, but it can also describe mount points for other file systems within the current file system, symbolic links, and so on. Many modern file systems use metadata journaling to maximize file system consistency. The file system keeps a journal of all changes to file system metadata and asynchronously updates the file system based on completed changes that have been written to the journal. If a system outage occurs, file system consistency can be quickly restored simply by replaying completed transactions from the metadata journal.

In Lustre, file system metadata is stored on a metadata server (MDS) and file data is stored in objects on the OSTs. This design divides file system updates into two distinct types of operations: file system metadata updates on the MDS and actual file data updates on the OSTs. File system namespace operations are performed on the MDS so that they do not impact the performance of operations that only manipulate actual object (file) data. Once the MDS identifies the storage location of a file, all subsequent file I/O is done between the client and the OSTs.

Using metadata servers to manage the file system namespace provides a variety of immediate opportunities for performance optimization. For example, metadata servers can maintain a cache of pre-allocated objects on various OSTs, expediting file creation operations. The scalability of metadata operations in Lustre is further improved through the use of an intent-based locking scheme. For example, when a client wishes to create a file, it requests a lock from an MDS to enable a lookup operation on the parent directory, and also tags this request with the intended operation, namely file creation. If the lock request is granted, the MDS uses the intent specified in the lock request to modify the directory, creating the requested file and returning a lock on the new file instead of on the directory.
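To illustrate why intent-based locking saves round trips, the sketch below (illustrative Python, not the Lustre wire protocol; the class and method names are invented) contrasts a conventional lookup-then-mkdir exchange with a single lock request that carries the mkdir intent, which the metadata server executes before replying.

```python
# Illustrative only: a lock request tagged with an intent lets the MDS perform
# the operation itself and return a lock on the result.

class MDS:
    def __init__(self):
        self.dirs = {"/": set()}

    # Conventional style: the client issues a lookup, then a separate mkdir.
    def lookup(self, parent):
        return parent in self.dirs

    def mkdir(self, parent, name):
        path = parent.rstrip("/") + "/" + name
        self.dirs[parent].add(name)
        self.dirs[path] = set()
        return path

    # Intent style: one round trip.  The lock request on the parent carries the
    # intended operation; the MDS exercises the intent and returns a lock on
    # the newly created entry instead of a lock on the parent directory.
    def lock_with_intent(self, parent, intent, name):
        assert parent in self.dirs
        if intent == "mkdir":
            return {"lock_on": self.mkdir(parent, name)}
        raise NotImplementedError(intent)

# Conventional: two network round trips.
mds_a = MDS()
if mds_a.lookup("/"):
    mds_a.mkdir("/", "projects")

# Intent-based: the same result in a single request.
mds_b = MDS()
print(mds_b.lock_with_intent("/", intent="mkdir", name="projects"))
# {'lock_on': '/projects'}
```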
Divorcing file system metadata operations from actual file data operations improves immediate performance, and it also improves long-term aspects of the file system such as recoverability and availability. Actual file I/O is done directly between Object Storage Targets and client systems, eliminating intermediaries. General file system availability is improved by providing a failover metadata server and by using distributed Object Storage Targets, eliminating any one MDS or OST as a single point of failure. In the event of widespread hardware or network outages, the transactional nature of the metadata stored on the metadata servers significantly reduces the time it takes to restore file system consistency by minimizing the chance of losing file system control information such as object storage locations and object attribute information.

File system availability and reliability are critical to any computer system, but become even more significant as the number of clients and the amount of managed storage increase. Local file system outages only affect the usability of a single workstation, but outages of a central resource such as a distributed file system can affect the usability of hundreds or thousands of client systems that need to access that storage. Lustre's flexibility, reliable and highly available design, and inherent scalability make Lustre well suited for use as a cluster file system today, when cluster clients number in the hundreds or low thousands, and tomorrow, when the number of clients depending on distributed file system resources will only continue to grow.

Figure 6: Lookup Intents. a) Lustre mkdir: the client sends a single lookup carrying an mkdir intent across the network; the metadata server's lock module exercises the intent and performs the mkdir itself. b) Conventional mkdir: the client issues separate lookup and mkdir requests across the network, and the file server performs its own lookup before creating the directory.

4. Network Independence in Lustre

As mentioned earlier in this paper, Lustre can be used over a wide variety of networks due to its use of an open network abstraction layer. Lustre is currently in use over TCP and Quadrics (QSWNet) networks. Myrinet, Fibre Channel, Stargen, and InfiniBand support are under development. Lustre's network neutrality enables Lustre to take immediate advantage of performance improvements provided by new network hardware and protocols. Lustre also provides unique support for heterogeneous networks: for example, it is possible to connect some clients over Ethernet to the MDS and OST servers, and others over a QSW network, all in a single installation.
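A rough sketch of the network-abstraction idea follows (illustrative Python, not the actual Portals NAL interface; the class names are invented): the upper request-processing layers program against one small transport interface, and each network type supplies its own implementation, which is what allows TCP and QSW clients to coexist in a single installation.

```python
# Illustrative sketch of a network abstraction layer -- not the real NAL API.

from abc import ABC, abstractmethod

class Transport(ABC):
    """The only interface the upper request-processing layers ever see."""
    @abstractmethod
    def send(self, peer, payload: bytes) -> None: ...
    @abstractmethod
    def recv(self) -> bytes: ...

class TcpTransport(Transport):
    def __init__(self):
        self.inbox = []
    def send(self, peer, payload):
        peer.inbox.append(payload)      # stand-in for a socket write
    def recv(self):
        return self.inbox.pop(0)

class QswTransport(Transport):
    """Same interface; a real implementation would drive the Quadrics NIC."""
    def __init__(self):
        self.inbox = []
    def send(self, peer, payload):
        peer.inbox.append(payload)
    def recv(self):
        return self.inbox.pop(0)

def issue_request(transport, peer, request: bytes):
    # The request-processing layer is unchanged whichever transport is used.
    transport.send(peer, request)

client_nic, ost_nic = TcpTransport(), TcpTransport()
issue_request(client_nic, ost_nic, b"read object 42, offset 0, length 1 MB")
print(ost_nic.recv())
```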
Figure 7: Support for Heterogeneous Networks. Two client stacks (FS, MDC, LOV, OSC) reach the same MDS and OST 1 (backed by a direct OBD) over different network types, including QSW, within a single installation.

Lustre also provides routers that can route the Lustre protocol between different kinds of supported networks. This is useful for connecting to 3rd-party OSTs that may not support all of the specialized networks available on generic hardware.

Figure 8: Lustre Network Routing. Lustre clients and the MDS sit on a QSW network, while Portals routers (Routers 1 through 4) route the Lustre protocol from the QSW network to a 3rd-party OST cluster of OSTs 1 through 8 (similar to the MCR cluster at LLNL).

Lustre provides a very sophisticated request processing layer on top of the Portals protocol stack, originally developed by Sandia but now available to the open source community. Below this is the network data movement layer, responsible for moving data vectors from one system to another. Beneath this layer, the Portals message passing layer sits on top of the network abstraction layer, which finally defines the interactions with the underlying network devices. The Portals stack provides support for high-performance network data transfer techniques such as Remote Direct Memory Access (RDMA), direct OS-bypass I/O, and scatter-gather I/O (memory and disk) for more efficient bulk data movement.
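The router mechanism described above can be sketched in the same spirit (illustrative only; the message format and routing table are invented for this example): a Portals-style router holds one transport per attached network and forwards messages whose destination lies on a different network than the one they arrived on.

```python
# Illustrative router sketch -- not the Portals routing implementation.

class Network:
    def __init__(self, name):
        self.name, self.delivered = name, []
    def deliver(self, message):
        self.delivered.append(message)

class Router:
    """Forwards messages between networks of different types (e.g. TCP <-> QSW)."""
    def __init__(self, networks):
        self.networks = networks                  # network name -> attached network

    def forward(self, message):
        self.networks[message["dest_net"]].deliver(message)

tcp_net, qsw_net = Network("tcp"), Network("qsw")
router = Router({"tcp": tcp_net, "qsw": qsw_net})

# A client on the GigE/TCP side reaches an OST on the QSW side through the router.
router.forward({"dest_net": "qsw", "dest": "OST3", "payload": b"write request"})
print(qsw_net.delivered)
```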
5. Lustre Administration Overview

Lustre's commitment to open standards such as Linux, the Portals network abstraction layer, and existing Linux journaling file systems such as ext3 for metadata storage is reflected in its commitment to open administrative and configuration information. Lustre's configuration information is stored in Extensible Markup Language (XML) files that conform to a simple Document Type Definition (DTD), which is published in the open Lustre documentation. Maintaining configuration information in standard text files means that it can easily be manipulated using simple tools such as text editors. Maintaining it in a consistent format with a published DTD makes it easy to integrate Lustre configuration into third-party and open source administration utilities. These configuration files can be generated and updated using the lmc (Lustre make configuration) utility. The lmc utility quickly generates initial configuration files, even for very large, complex clusters involving hundreds of OSTs, routers, and clients.

Lustre is integrated with open network data resources and administrative mechanisms such as the Lightweight Directory Access Protocol (LDAP) and the Simple Network Management Protocol (SNMP). The lmc utility can convert LDAP-based configuration information to and from XML-based configuration information. The LDAP infrastructure provides redundancy and assists with cluster recovery. To provide enterprise-wide monitoring, Lustre exports status and configuration information through an SNMP agent, offering a Lustre MIB to management stations.

Lustre provides several basic, command-line-oriented utilities for initial configuration and administration. The lctl (Lustre control) utility can be used to perform low-level Lustre network and device configuration tasks, as well as batch-driven tests to check the sanity of a cluster. The lconf (Lustre configuration) utility enables administrators to configure Lustre on specific nodes using user-specified configuration files. The Lustre documentation provides extensive examples of using these commands to configure, start, reconfigure, and stop or restart Lustre services.

6. Future Directions for Lustre

Lustre's distributed design and its use of metadata servers and Object Storage Targets as intermediaries between client requests and actual data access lead to a very scalable file system. The next few sections highlight several items on the Lustre roadmap that will further improve the performance, scalability, and security of Lustre.

6.1 The Lustre Global Namespace

As mentioned earlier in this paper, distributed file systems provide a number of administrative advantages. From the end-user perspective, the primary advantages of distributed file systems are that they provide access to substantially more storage than could be physically attached to a single system, and that this storage can be accessed from any authorized workstation. However, accessing shared storage from different systems can be confusing if there is no uniform way of referring to and accessing that storage. The classic way of providing a single way of referring to and accessing the files in a distributed filesystem is a "global namespace". A global namespace is typically a single directory on which an entire distributed filesystem is made available to users.
This is known as mounting the distributed filesystem on that directory. In the AFS distributed file system, the global namespace is the directory /afs, which provides hierarchical access to filesets on various servers that are mounted as subdirectories somewhere under the /afs directory. When traversing fileset mountpoints, AFS does not store configuration data on the client to find the target fileset, but instead contacts a fileset location server to determine the server on which the fileset is physically stored. In AFS, mountpoint objects are represented as symbolic links that point to a fileset name or identifier.
This requires that AFS mount objects be translated from symbolic links to specific directories and filesets whenever a fileset is mounted. Unfortunately, existing file systems like AFS contain hardwired references to the mountpoints for the file systems; those file systems must therefore always be found at those locations, and can only be found at those locations.

Unlike existing distributed filesystems, Lustre intends to provide the best of both worlds by providing a global namespace that can easily be grafted onto any directory in an existing Linux filesystem. Once a Lustre filesystem is mounted, any authenticated client can access files within it using the same path and filename, but the initial mount point for the Lustre filesystem is not pre-defined and need not be the same on every client system. If desired, a uniform mountpoint for the Lustre filesystem can be enforced administratively by simply mounting the Lustre filesystem on the same directory on every client, but this is not mandatory. Lustre intends to simplify mounting remote storage by setting special bits on directories that are to be used as mount points, and by storing the mount information in a special file in each such directory. This is completely compatible with every existing Linux file system, eliminates the extra overhead required to obtain mount information from a symbolic link, and makes it possible to identify mountpoints without actually traversing them unless information about the remote file system is needed.

Figure 9: Lustre Global Namespace. A filter exposes Lustre mount points within the local directory tree: the root directory and directories such as home and share contain mntinfo files whose entries (for example, type lustre with a server name and target /root, or type nfs with server srv2 and target /home) describe the remote file system whose real content overlays the mount point; user directories such as pete, john, doug, and anne appear beneath them.

An easily overlooked benefit of the Lustre mount mechanism is that it provides greater flexibility than existing Linux mount mechanisms. Standard Linux client systems use the file /etc/fstab to maintain information about all of the file systems that should be mounted on that client. The Lustre mount mechanism transfers the responsibility for maintaining mount information from a single per-client file into the client file system. The Lustre mount mechanism also makes it easy to mount other file systems within a Lustre file system, without requiring that each client be aware of all file systems mounted within Lustre. Lustre is therefore not only a powerful distributed file system in its own right, but also a powerful integration mechanism for other existing distributed file systems.

6.2 Metadata and File I/O Performance Improvements

One of the first optimizations to be introduced is the use of a writeback cache for file writes, to provide higher overall performance. Currently, Lustre writes are write-through, which means that write requests are not complete until the data has actually been flushed to the target OSTs. In busy clusters, this can impose a considerable delay on every file write operation. A writeback cache, in which file writes are journaled and committed asynchronously, promises substantially higher performance for file write requests.
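A minimal sketch of the write-back idea follows (illustrative Python; the real design journals writes and preserves ordering and recovery semantics that are omitted here, and the class names are invented): writes complete as soon as they are recorded locally and are flushed to the OSTs asynchronously in batches.

```python
# Illustrative write-back cache sketch -- not Lustre's implementation.

class WritebackCache:
    def __init__(self, ost, flush_threshold=4):
        self.ost = ost
        self.pending = []                     # journaled, not-yet-committed writes
        self.flush_threshold = flush_threshold

    def write(self, obj_id, offset, data):
        # Write-back: the call returns as soon as the update is journaled locally.
        self.pending.append((obj_id, offset, data))
        if len(self.pending) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # Commit the batch to the OST (done synchronously here for brevity;
        # the point of the design is that this happens asynchronously).
        for obj_id, offset, data in self.pending:
            self.ost.write(obj_id, offset, data)
        self.pending.clear()

class FakeOST:
    def __init__(self):
        self.writes = []
    def write(self, obj_id, offset, data):
        self.writes.append((obj_id, offset, data))

cache = WritebackCache(FakeOST(), flush_threshold=2)
cache.write("obj-1", 0, b"a" * 4096)       # returns immediately
cache.write("obj-1", 4096, b"b" * 4096)    # threshold reached, batch flushed
```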
The Lustre file system currently stores and updates all file metadata (except allocation data, which is held on the OSTs) through a single (failover) metadata server. While simple, accurate, and already very scalable, depending upon a single metadata server can limit the performance of metadata operations in Lustre. Metadata performance can be greatly improved by implementing clustered metadata servers. Distributing metadata across the cluster also distributes the metadata processing load across the cluster, improving the overall throughput of metadata operations.

Figure 10: Meta-data Clustering. The client uses MDS-clustering load-balance information to calculate the correct MDS for each operation and sends its metadata update records (1st, 2nd, 3rd, 4th) to that server; the clustered metadata servers (MDS 1, MDS 2, MDS 3) exchange occasional distributed MDS-to-MDS updates.

A writeback cache for Lustre metadata servers is also being considered. With a metadata writeback cache, metadata updates would first be written to the cache and subsequently flushed to persistent storage on the metadata servers at a later time. This would dramatically improve the latency of updates as seen by client systems. It would also enable batched metadata updates, which could reduce communication and increase parallelism.

Figure 11: Lustre Meta-data Writeback Cache. The MDS pre-allocates inodes to the client; many metadata updates then complete locally in the client's writeback cache and are later reintegrated with the MDS as batched metadata updates.

Read scalability is achieved in a very different manner. Good examples of potential bottlenecks in a clustered environment are system files or centralized binaries that are stored in Lustre but required by multiple clients throughout the cluster at boot time. If multiple clients were rebooted at the same time, they would all need to access the same system files simultaneously. If every client has to read that data from the same server, the network load at that server could be extremely high, and the available network bandwidth at that server could pose a serious bottleneck for general file system performance. Using a collaborative cache, in which frequently requested files are cached across multiple servers, would help distribute the read load across multiple nodes, reducing the bandwidth requirements at each.
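Returning to the metadata clustering shown in Figure 10, one common way to realize the "calculate correct MDS" step is to hash part of the file identifier to choose a server. The sketch below is an assumption made for illustration, not Lustre's actual placement policy: it distributes directories across a set of metadata servers by hashing the parent directory path.

```python
# Illustrative metadata clustering sketch -- the hash-on-parent policy is an
# assumption for illustration, not Lustre's actual placement algorithm.

import hashlib

def mds_for(parent_dir, num_mds):
    """Pick the metadata server responsible for entries under parent_dir."""
    digest = hashlib.sha1(parent_dir.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_mds

# Metadata load for different directories spreads across the MDS cluster,
# while all entries of a single directory stay on one MDS.
for d in ("/home/anne", "/home/pete", "/scratch/run1"):
    print(d, "-> MDS", mds_for(d, num_mds=3))
```

Keeping each directory's entries on a single MDS keeps common operations local to one server, at the cost of the occasional distributed MDS-to-MDS updates shown in the figure.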
A second example arises in situations such as video servers, where large amounts of data are read from and written to network-attached devices. Lustre will provide QOS guarantees to meet these high-rate I/O situations. A collaborative cache will be added to Lustre, enabling multiple OSTs to cache data that is frequently used by multiple clients. In Lustre, read requests for a file are serviced in two phases: a lock request precedes the actual read request, and while the OST is granting the read lock, it can assess where in the cluster the data has already been cached and include a referral to that node for reading. The Lustre collaborative cache is globally coherent.

Figure 12: Collaborative Read Cache for Lustre. A client (Lustre FS, OSC, writeback cache) requests the lock required for a file read from an OST; the lock reply carries a cache (COBD) referral, and the read request is redirected either to a peer client node with a cache or to a dedicated cache server (FS, COBD, OSC) whose caches are filled from the OSTs backed by direct OBDs.

6.3 Advanced Security

File system security is a very important aspect of a distributed file system. The standard aspects of security are authentication, authorization, and encryption. While SANs are largely unprotected, Lustre provides the OSTs with secure network-attached disk (NASD) features. Rather than selecting and integrating a specific authentication service, Lustre can easily be integrated with existing authentication mechanisms using the Generic Security Service Application Programming Interface (GSS-API), an open standard that provides secure session communications supporting authentication, data integrity, and data confidentiality. Lustre authentication will support Kerberos 5 and PKI mechanisms as authentication backends. Lustre intends to provide authorization using access control lists that follow POSIX ACL semantics. The flexibility and additional capabilities provided by ACLs are especially important in clusters that may support thousands of nodes and user accounts. Data privacy is expected to be ensured using an encryption mechanism such as that provided by the StorageTek/University of Minnesota SFS file system, in which data is automatically encrypted and decrypted on the client using a shared-key protocol that makes file sharing by project a natural operation. The OSTs are protected with a very efficient capability-based security mechanism that provides significant optimizations over the original NASD protocol.
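The capability mechanism can be sketched roughly as follows (illustrative only; the key handling and capability format are simplified assumptions rather than the Lustre or NASD wire formats): the MDS checks the ACLs once and hands the client a signed capability naming the object and the permitted operation, which an OST can verify locally without consulting the MDS.

```python
# Illustrative capability sketch -- simplified; not the Lustre/NASD wire format.

import hmac, hashlib

SHARED_KEY = b"mds-and-ost-shared-secret"   # assumption: MDS and OSTs share a key

def mds_issue_capability(user, obj_id, mode):
    """MDS side: after checking the ACLs, sign (user, object, mode)."""
    blob = f"{user}:{obj_id}:{mode}".encode()
    tag = hmac.new(SHARED_KEY, blob, hashlib.sha256).hexdigest()
    return blob, tag

def ost_check_capability(blob, tag, requested_obj, requested_mode):
    """OST side: verify the signature locally -- no call back to the MDS."""
    expected = hmac.new(SHARED_KEY, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        raise PermissionError("forged or altered capability")
    _, obj_id, mode = blob.decode().split(":")
    if obj_id != requested_obj or mode != requested_mode:
        raise PermissionError("capability does not cover this request")
    return True

cap = mds_issue_capability("anne", "obj-17", "read")
assert ost_check_capability(*cap, requested_obj="obj-17", requested_mode="read")
```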
Figure 13: Lustre Read File Security. Step 1: authenticate the user with Kerberos and obtain a session key. Step 2: issue authenticated open RPCs to the MDS. Step 3: the MDS traverses the ACLs. Step 4: get an OST capability from the MDS. Step 5: send the capability to the OST. Step 6: read the encrypted file data from the OST. Step 7: get the SFS file key. Step 8: decrypt the file data on the client.

6.4 Further Features

Sharing existing file systems through a cluster file system, to provide redundancy and load balancing for existing operations, is the ultimate dream for small-scale clusters. Lustre's careful separation of protocols and code modules makes this a relatively simple target. Lustre will provide file system sharing with full coherency by supporting SAN networking, together with a combined MDS/OST that exports both the data and metadata APIs from a single file system.

File system snapshots are a cornerstone of enterprise storage management. Lustre will provide fully featured snapshots, including rollback, old-file directories, and copy-on-write semantics. This will be implemented as a combination of snapshot infrastructure on the clients, OSTs, and metadata servers, each requiring only a small addition to its infrastructure.

7. Summary

Lustre is an advanced storage architecture and distributed file system that provides significant performance, scalability, and flexibility to computing clusters, enterprise networks, and shared-data, network-oriented computing environments. Lustre uses an object storage model for file I/O and storage management to provide a substantially more efficient division of labor between computing and storage resources. Replicated, failover Metadata Servers (MDSs) maintain a transactional record of high-level file and file system changes. Distributed Object Storage Targets (OSTs) are responsible for actual file system I/O and for interfacing with local or networked storage devices known as Object-Based Disks (OBDs). Lustre leverages open standards such as Linux, XML, LDAP, and SNMP, readily available open source libraries, and existing file systems to provide a powerful, scalable, reliable distributed file system. Lustre uses sophisticated, cutting-edge failover, replication, and recovery techniques to eliminate downtime and to maximize file system availability, thereby maximizing performance and productivity. Cluster File Systems, Inc., the creators of Lustre, are actively working with hardware manufacturers to help develop the next generation of intelligent storage devices, in which hardware improvements can further offload data processing from the software components of a distributed file system to the storage devices themselves.
Lustre is open source software licensed under the GPL. Cluster File Systems provides customization, contract development, training, and service for Lustre. In addition to service, our partners can also provide packaged solutions under their own licensing terms. Additional information about Lustre, including the current documentation and source code, is available from the Lustre web site and from Cluster File Systems, Inc.

Lustre Whitepaper Version 1.0: November 11th, 2002