DDN Whitepaper
Storage Fusion Architecture™
SFA10K-X & SFA10K-E
Breaking Down Storage Barriers by Providing Extreme Performance in Both Bandwidth and IOPS

Table of Contents

Abstract
Introduction
Introducing the SFA10K-X
SFA10K-X Storage OS Architecture
  Active/Active Model
  Data Protection
    RAID
    Hot Spares
    Battery-Backed Write-Back Cache
    Mirrored Write-Back Cache
    Mirrored Transaction Journal
    Metadata Mirrored n-Ways
    Parity Check On Read SATAssure
    Data Integrity Field SATAssure
  Storage System Efficiencies
    Partial Disk Rebuild
    Real-time Adaptive Cache Technology (ReACT)
    Rebuild Priority
  Software Summary
SFA10K-X Hardware Architecture
  RAID Processing
  IO Channels and Architecture
  Cache
  Back End Disk Connectivity
  Hardware Summary
SFA OS and Embedded File Systems
  Embedded Application Capability
  PCIe Device Dedication
  Virtual Disk Driver
  Reduction in Equipment, Infrastructure and Complexity
SFA10K Family: Summary

Abstract

High performance storage systems usually fall into one of two categories: those with high IOPS capability or those with high throughput capability. In the world of supercomputing, where the focus is usually on massive scale, the preference (and need) has traditionally favored storage systems with high throughput capabilities. The move to multi-core processors, coupled with ever increasing numbers of nodes in supercomputing clusters, has fundamentally changed the data patterns and storage system requirements. Traditional storage systems are not capable of both high IOPS and high throughput. This paper presents a new storage architecture which can adapt to modern compute environments and the unique data storage challenges they present. Additionally, we will examine an architecture that allows for embedding clustered file systems directly into the storage, resulting in significant reductions in both complexity and latency.

Introduction

Across the storage industry, the vast majority of block storage systems have been deliberately designed to deliver random access I/O to serve transactional applications. Of primary focus to storage system manufacturers are transactional applications such as reservation systems, banking applications, databases, messaging applications and batch processing jobs. These compute processes utilize fixed, structured storage formats, which are commonly referred to as structured data. With structured data, information is communicated to/from storage in small blocks and accessed in a generally random pattern, which requires high Input/Output Operations per Second (IOPS) to deliver sufficient performance.

In recent years, the digital content revolution, which has enabled personal, social and business computing as well as predictive simulation, weather forecasting and satellite image processing, has resulted in an explosion in both the size and number of files stored online. According to IDC, by 2012, 85% of enterprise storage consumption will be made up of various types of file-based, or unstructured, data. These files are referred to as unstructured data because they are usually read or written in whole, utilizing large blocks, often sequentially to/from their storage systems.

Figure 1 - Worldwide Enterprise Disk Storage Consumption by Segment (Source: IDC)

The growth of unstructured data has necessitated change in storage technology, because systems optimized for high random IOPS are not designed to handle large sequential access well. This market opportunity gave rise to storage systems optimized for sequential access, or high bandwidth, such as DataDirect Networks' Silicon Storage Architecture (S2A) product line. S2A utilizes specialized hardware to read and write unstructured data at the highest performance levels, both for read and write, with no degradation during system correction events such as drive rebuilds.

Just as systems optimized for random IOPS do not excel at storing large sequential files, systems optimized for bandwidth are not necessarily class-leading in transactional data patterns. To date, premium storage systems fall into two categories: those optimized for random IOPS or those optimized for bandwidth. The explosive growth in unstructured data favors storage systems optimized for bandwidth, as the growth in structured data as a percentage of aggregate market demand is slowing year over year (Figure 1).

This growth has largely coincided with increasing CPU speeds. As processor frequency approached the upper limits of what is physically possible with silicon-based technology, CPU manufacturers found a different avenue to increase compute power per CPU: combining multiple processing cores onto a single CPU socket, extending Moore's Law by several years. Recently, the number of processing cores (or simply "cores") per chip in the commodity space has increased to the point that four cores are common and eight-core processors are just around the corner (Figure 2).

Figure 2 - Growth over time in CPU cores per processor socket

The increase in the number of cores per chip and the number of threads per core allows multiple processes to run simultaneously, often producing multiple file accesses at once. What the individual running processes view as sequential access, the storage system sees as increasingly random access, as data must be read or written in multiple locations on the storage media rather than stepping sequentially through one location. Further, access to hundreds or thousands of files simultaneously via a single file system namespace, or the effect of thousands of threads writing a single file, requires substantial POSIX metadata operations that demand high-speed random IOPS for optimal response.

The need for multi-threaded, simultaneous file access on a massive scale is not a future requirement; it is happening today. Currently, the top supercomputers have over 200,000 CPU cores in their compute clusters, resulting in potentially hundreds of thousands of simultaneous file writes during checkpoint operations. Leading websites have tens of billions of files stored, accessed at any time with hundreds of thousands of file accesses per second. The continuous increase in processing cores per socket will allow clients to access more and more files simultaneously. This multi-threaded IO will produce storage access patterns that are increasingly random, requiring high IOPS capability. Thus, a storage system designed to serve large files to multi-core compute environments must now be optimized to support mixed-mode access, offering both high random IOPS and high bandwidth.

Seemingly, storage systems can be optimized to serve either high random IOPS or high bandwidth. Conventional wisdom says that systems can excel at one or the other characteristic, but not both. Conventional wisdom also once said a storage system could not write as fast as it reads at peak bandwidth levels, but DDN's Silicon Storage Architecture broke through that long-standing belief. Today, a storage system can offer extreme performance in both random IOPS and bandwidth. That system utilizes DDN's new Storage Fusion Architecture and is known as the SFA10K-X.

Introducing the SFA10K-X

The SFA10K-X builds on the revolutionary Storage Fusion Architecture (SFA) introduced by DataDirect Networks. SFA is based on a unique combination of highly parallelized software, industry-proven data integrity algorithms and high-speed hardware components to produce a storage controller that performs in the extreme range of both bandwidth and IOPS. By marrying a state-of-the-art, multi-threaded data integrity engine to best-of-breed processors, interconnects, busses, memory architecture and media technologies, the SFA10K-X capitalizes on the same advancements in technology as the clients it serves. This strategy ensures that as these technologies evolve and improve, SFA performance will improve along with them.

[Figure 3 - SFA10K-X Active/Active RAID Controller Architectural Overview: 16 x 8Gb Fibre Channel and 8 x QDR InfiniBand host ports feed two controllers, each with SFA interface virtualization and 8GB of high-speed cache joined by a 60Gb/s cache link; highly parallelized storage processing and internal SAS switching connect over a 480Gb/s internal SAS data network to SATA (leading capacity and cost-optimized bandwidth), SAS (balanced mix of IOPS, capacity and bandwidth) and SSD (unrivaled IOPS for transactional applications) drives, protected by SFA RAID 1, 5, 6 and 10.]

The SFA10K-X employs RAID, data integrity and data management software written from the ground up to take advantage of multi-core processors and modern bus architectures. This highly threaded architecture allows performance to scale linearly with advances in the underlying hardware. The same architecture allows the SFA10K-X to do what no other RAID controller has been able to do to date: perform in the extreme range of both throughput and IOPS. The SFA10K-X delivers over 1 million random burst IOPS to cache and over 840,000 sustained 4K IOPS to SSDs. Sequential block throughput is 15GB/s for simultaneous reads and writes. Designed to house the most scalable unstructured file data, the system supports up to 1,200 drives of raw storage while enabling a combination of SAS, SATA and SSD drives.

SFA10K-X Storage OS Architecture

The SFA10K-X storage operating system was written from the ground up to fully leverage the power of multi-core processors. A storage controller, though, is made up of many components, and the design goal of the Storage Fusion Architecture Operating System (SFA OS) was to get the maximum performance out of every component in the system. Thus, it is not only the RAID engine that is optimized, but also the cache engine, data movers, drivers, schedulers and much more. All of these storage subsystems are highly parallelized and multi-threaded, creating a powerful, scalable software architecture that serves as the basis for high performance, high availability and rich features that will grow over time.

Active/Active Model

From conception, the SFA10K-X was designed to work in an Active/Active fashion. There are essentially two ways to implement Active/Active operation in a redundant RAID controller: Active/Active with Distributed Locking, or Active/Active with Dynamic Routing.

Active/Active with Distributed Locking is the method that has been used historically for DDN's S2A products. With this method, each logical unit is online to both controllers. Both controllers cache data for the logical unit, both controllers access the physical disks that contain the logical unit directly, and distributed locks are used to guarantee storage register semantics and write atomicity. The locks are communicated across an inter-controller link (ICL). Because the S2A is optimized for throughput and has relatively little cache, ICL traffic is low and does not impact performance; however, experience has shown that distributed locking slows IOPS performance. This is partly due to the ICL communication latency, but has more to do with the lock and cache lookup times. Thus, in a system destined to perform at extreme IOPS levels, a different method had to be implemented.

Storage Fusion Architecture implements an Active/Active host presentation model with routing-based data access and full cache coherency. The SFA OS provides preference indicators and target port groups for its SCSI target implementation, and thus has the notion of a preferred controller and a preferred RAID Processor (RP). In this approach, each logical unit is online to both controllers, but only one controller takes primary ownership of a given logical unit at a given time. The controller that masters the logical unit caches data for it and accesses the physical disks that contain its data. Additionally, the controller that masters the logical unit is the preferred controller for that logical unit, and IO requests received by the non-preferred controller are forwarded to the mastering controller. This intelligent approach to storage management requires no distributed locking; instead, IO requests are forwarded (Figure 4). When mirrored write-back caching is performed, write data must be transferred to both controllers in any case, so forwarding incurs no additional data transfers. Read data does have to be transferred across the ICL for reads that are not sent to the preferred controller; however, these reads benefit from the logical unit's read-ahead cache. When in write-thru mode, write data does have to be transferred across the ICL for writes that are not sent to the preferred controller.
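The routing decision itself is simple to express. The following is a minimal sketch with hypothetical names (the paper does not publish SFA OS internals) of how dynamic routing avoids distributed locks: IO arriving anywhere is either serviced by the mastering controller or forwarded to it.

```python
# Minimal sketch of Active/Active with Dynamic Routing (illustrative names;
# not SFA OS source). Only the controller that masters a logical unit
# touches its cache and disks; IO landing elsewhere is forwarded over the
# inter-controller link instead of being coordinated with distributed locks.

def handle_io(controller, lun, request):
    master = lun.current_home            # the controller mastering this LUN
    if controller is master:
        return controller.service(lun, request)      # cache + physical disks
    # Non-preferred path: forward across the ICL; the master executes the
    # request and the result returns the same way.
    return controller.forward_over_icl(master, lun, request)
```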

[Figure 4 - Active/Active Routing Depicting IO Scenarios: four cases (write to preferred path, read from preferred path, write to non-preferred path, read from non-preferred path) showing the logical disk client, the logical disk master controller and its partner controller, with data transfers and cache mirroring carried across the inter-controller link (ICL).]

There are several advantages to the Active/Active with Routing method. The main advantage is that no distributed locking is required, which leads to better IO performance and a very clean failover implementation, which in turn enhances data integrity. Another advantage is that caching, both read and write, is more efficient and effective because all of the cache data for a logical unit resides in a single location.

Virtual disk clients need at least one path to each controller to allow failover, and thus need a multi-path IO driver to recognize that the logical units presented by the two controllers for one logical disk represent the same logical disk. It is important that the multi-path IO driver is able to understand the standard SCSI preference indicators and target port groups. Such drivers are readily available for most major operating systems, including Microsoft Windows server products and Linux.

Each SFA storage pool (i.e., RAID set) has a preferred home attribute that allows specification of which controller and RP should master the logical disks, or virtual disks, that are realized with that storage pool. Each logical disk has a current home attribute that indicates the controller actually mastering the logical unit at present; this changes dynamically during failover and failback, or when the preferred home attribute is changed. The SCSI preference indicators dynamically change to reflect the current home, and MPIO drivers are designed to adapt to changes in the SCSI preference indicators, so a proper MPIO driver will send most IO requests to the controller that masters the logical unit.
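A short sketch of how the preferred home and current home attributes might interact during failover and failback; the class and method names are hypothetical, as the paper does not detail the mechanism:

```python
# Illustrative sketch of preferred/current home semantics (assumed names).
# The SCSI preference indicators exposed to MPIO drivers track current_home,
# so preference-aware multi-path drivers re-route IO automatically.

class StoragePool:
    def __init__(self, preferred_home):
        self.preferred_home = preferred_home   # administrator's choice
        self.current_home = preferred_home     # changes during failover

    def on_controller_failure(self, failed, survivor):
        if self.current_home is failed:
            self.current_home = survivor       # survivor masters the pool;
            # preference indicators now point MPIO drivers at the survivor

    def on_failback(self, restored):
        if self.preferred_home is restored:
            self.current_home = restored       # ownership returns home
```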

Data Protection

RAID

The SFA RAID stack provides protection against single physical disk failures with RAID-1 or RAID-5 data protection, as well as against double physical disk failures through the use of high-speed RAID-6 protection. Both the SFA RAID-5 and RAID-6 parity protection implementations use a rotating parity scheme. The RAID-5 implementation adds a parity chunk to every stripe using XOR. The RAID-6 implementation adds a P and a Q chunk to every stripe, where P and Q are calculated with Galois Field arithmetic. Particular attention has been paid to closing all of the write holes¹; the method for doing so is beyond the scope of this paper.

[Figure 5 - Example of RAID-6 RAID set layout: the first stripe holds data chunks 0-7 plus P and Q parity for chunks 0-7, the next stripe holds chunks 8-15 with its P and Q rotated to different members, and so on, with additional stripes rotating parity to balance the chunks on each member.]

¹ For RAID 5 and RAID 6, in the event of a system failure while there are active writes, the parity of a stripe may become inconsistent with the data. If this is not detected and repaired before a disk or block fails, data loss may ensue, as incorrect parity will be used to reconstruct the missing block in that stripe. This potential vulnerability is sometimes known as the write hole. Battery-backed cache and similar techniques are commonly used to reduce the window of opportunity for this to occur.
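The paper does not give DDN's parity code, but the standard RAID-6 construction it references, XOR for P and Galois Field arithmetic for Q, can be sketched as follows. This is a toy implementation over GF(2^8); the chunk format and member ordering are assumptions.

```python
# Toy RAID-6 parity sketch (not DDN's implementation): P is the XOR of the
# data chunks; Q is a weighted XOR over GF(2^8) so that any two lost chunks
# in a stripe can be recovered by solving two equations.

GF_POLY = 0x11D  # x^8 + x^4 + x^3 + x^2 + 1, the field polynomial commonly
                 # used for RAID-6 Galois Field arithmetic

def gf_mul(a: int, b: int) -> int:
    """Multiply two elements of GF(2^8)."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a & 0x100:
            a ^= GF_POLY
        b >>= 1
    return result

def gf_pow(base: int, exp: int) -> int:
    result = 1
    for _ in range(exp):
        result = gf_mul(result, base)
    return result

def pq_parity(chunks):
    """Compute the P and Q chunks for one stripe's data chunks."""
    p = bytearray(len(chunks[0]))
    q = bytearray(len(chunks[0]))
    for i, chunk in enumerate(chunks):
        g = gf_pow(2, i)                  # per-member generator weight
        for j, byte in enumerate(chunk):
            p[j] ^= byte                  # P: plain XOR
            q[j] ^= gf_mul(g, byte)       # Q: GF(2^8)-weighted XOR
    return bytes(p), bytes(q)
```

With a single failure, P alone rebuilds the missing chunk exactly as in RAID-5; with two failures, P and Q together give two independent equations over GF(2^8) that identify both missing chunks.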

A RAID set is implemented using an integral number of equal-sized members, which are whole physical disks. The total number of RAID set members is the number of data members plus the number of parity members. A chunk is one or more sequential data blocks from a single RAID set member; each member is made up of a sequence of chunks. A stripe consists of a set of chunks, the same ordinal chunk from each RAID set member. For RAID 6, two of the stripe's chunks are used for parity (P and Q) while the remaining chunks are used for logical disk data. The data and parity members are laid out as shown in Figure 5 to provide load balancing for both reads and writes; this is sometimes referred to as "left symmetric." For normal reads, only the data members need to be read. Optionally, the parity chunk (P) is also read and the parity is checked, part of a feature called SATAssure which guards against silent data corruption, a phenomenon that is possible and sporadically observed with SATA disk drive technology.

Hot Spares

The SFA OS provides pools of spare physical disks that can be used automatically to replace failed physical disks. By replacing a failed RAID set member automatically, the mean-time-to-repair for the RAID set is minimized, resulting in improved data reliability.

Battery-Backed Write-Back Cache

SFA OS provides a write-back cache feature that is used to improve IO performance. Write-back cache data which has not yet been written to disk is preserved by maintaining power to the cache memory in the event of an AC mains failure, long enough to copy the contents of the cache to stable storage. In addition, SFA OS is designed to tolerate a simultaneous AC mains failure and RAID software failure.

Mirrored Write-Back Cache

SFA OS provides the ability to mirror all write-back cache data such that the failure of a single controller will not result in data loss. A storage administrator can optionally turn off write-back cache mirroring for a RAID set (for higher performance); however, data protection is then reduced for logical units within that RAID set.

Mirrored Transaction Journal

RAID write holes are prevented by executing stripe updates as ACID (Atomicity, Consistency, Isolation and Durability)² transactions, so that when they are interrupted by a power failure they can be recovered, once power is restored, from the transaction journal implemented within the write-back cache. This journal is mirrored so that when a simultaneous power failure and controller hardware failure occurs, the surviving controller can recover the transactions.

² In computer science, ACID (Atomicity, Consistency, Isolation, Durability) is a set of properties that guarantee that database transactions are processed reliably. In the context of databases, a single logical operation on the data is called a transaction. An example of a transaction is a transfer of funds from one bank account to another, even though it might consist of multiple individual operations (such as debiting one account and crediting another). This brief definition was obtained from Wikipedia.
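The journaled stripe update described above can be pictured with a short sketch. The journal and raidset interfaces here are hypothetical, and pq_parity refers to the toy RAID-6 routine sketched earlier:

```python
# Illustrative sketch of journaled stripe updates (assumed API, not SFA OS
# code): the intent is logged to the mirrored write-back-cache journal
# before any member is touched, so an interrupted update can be replayed
# after a power failure instead of leaving a write hole.

def update_stripe(journal, raidset, stripe: int, new_data):
    new_p, new_q = pq_parity(new_data)     # toy routine from the RAID section
    entry = journal.log_intent(stripe, new_data, new_p, new_q)
    journal.mirror(entry)                  # survives a controller failure too
    raidset.write_stripe(stripe, new_data, new_p, new_q)
    journal.commit(entry)                  # the update is now durable

def recover(journal, raidset):
    """After a power failure, replay logged-but-uncommitted updates so a
    stripe's data and parity can never disagree."""
    for entry in journal.uncommitted():
        raidset.write_stripe(entry.stripe, entry.data, entry.p, entry.q)
        journal.commit(entry)
```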

Metadata Mirrored n-Ways

SFA OS stores a copy of storage system metadata on 18 physical disks to minimize the likelihood that its metadata is lost or corrupted.

SATAssure: Silent Data Corruption Detection and Avoidance

SATAssure is a trademarked name for techniques that detect and correct data errors made by physical disks. It is particularly valuable when using lower-cost spinning disk, such as SATA drives, which are designed to a less stringent bit-error-rate requirement than enterprise-quality SAS disks. In SFA OS there are two levels of SATAssure:

Parity Check On Read SATAssure: For the original implementation of SATAssure, SFA OS allows the administrator to specify whether SATAssure is turned on or off per RAID set. If enabled for a given RAID set, RAID parity will be checked on all reads. This method is referred to as Parity Check on Read (PCOR). In the event that the RAID parity is found to be bad, SFA OS takes steps to correct the data, including retrying the reads and using P and Q to identify the bad data. Once the bad data is identified, the correct data is regenerated from parity and the read is returned; any bad data on physical disk is corrected in the process. When data is read as part of a write operation (e.g., in a read-modify-write), the parity is checked as part of the read operations.

PCOR-based SATAssure can have an effect on performance because every read and write involves every data member of the RAID set; this performance impact varies with data access patterns. An IO pattern in which every IO is full-stripe-aligned (the IO size equals the stripe size and is aligned on a stripe boundary) naturally involves every data member of the RAID set and sees minimal performance impact with PCOR SATAssure on. Sequential IO patterns, in which the read-ahead or write-back cache can turn non-stripe-aligned IOs into full-stripe-aligned RAID set IOs, likewise see minimal performance impact. Small random reads performed with PCOR SATAssure enabled (reads that access fewer disks than the number in a full stripe) will suffer more degradation, due to the requirement to read from all the disks in the stripe to check parity.

Data Integrity Field SATAssure: Another approach to detecting and correcting physical disk errors is to store redundant information about the data in a form other than RAID parity. One approach is to store a hash (e.g., a CRC check), or Data Integrity Field (DIF), of each block's data, and then check this each time the block is read. Of course, the physical disk already stores a sophisticated Reed-Solomon code for each block that both detects and corrects errors, so having the RAID system store another hash may seem redundant; remember, though, that the purpose of SFA SATAssure is to catch the undetected errors of low-cost physical disks.

There are several advantages to performing data integrity verification with a DIF versus the PCOR method alone. The first is that calculating checksums is far less compute-intensive than calculating parity, and hence results in significantly less performance degradation. Additionally, the DIF method can easily detect and correct silent data corruption on mirrored (RAID-1) RAID sets. Lastly, the DIF method has become accepted and standardized in the form of ANSI T10-DIF. This means it may be possible in a future version of SFA OS to emulate complete end-to-end data integrity checks even with SATA disk drives.
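A minimal sketch of the PCOR read path described above, with a hypothetical raidset interface; the retry and Q-based identification steps are condensed into a single call, since the paper does not detail them:

```python
# Minimal PCOR sketch (illustrative names, not SFA OS code): every read
# touches all data members plus P; if stored P disagrees with the
# recomputed XOR, the repair path locates and regenerates the bad chunk.

def read_with_pcor(raidset, stripe: int):
    """Read a stripe's data members and verify them against stored P."""
    data = [raidset.read_chunk(stripe, m) for m in raidset.data_members(stripe)]
    stored_p = raidset.read_chunk(stripe, raidset.p_member(stripe))

    computed_p = bytearray(len(stored_p))
    for chunk in data:                      # P is the XOR of all data chunks
        for j, byte in enumerate(chunk):
            computed_p[j] ^= byte

    if bytes(computed_p) != stored_p:
        # Mismatch: retry the reads, use Q to pinpoint the bad chunk,
        # regenerate it from parity, and rewrite the bad data on disk.
        data = raidset.identify_and_repair(stripe)
    return data
```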

To improve SATAssure performance and provide additional data integrity checking, SFA OS includes this secondary DIF method for ensuring data integrity on SATA disks. This secondary method employs a 512-byte DIF block inserted into the logical pool (RAID set) space every 32KB. The DIF block is comprised of 16-bit CRCs for each of the previous 64 data blocks; the extra space in the DIF block is reserved for proprietary metadata. When data is read from physical disks within a RAID set, an additional 16-bit CRC on each block is calculated and checked against the stored value. If an error is detected, steps are taken to correct it using retries and RAID redundancy. The CRC information is cached to minimize the impact on performance.
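That layout can be made concrete with a short sketch. The CRC-16 polynomial and the little-endian packing below are assumptions; SFA OS's actual polynomial and proprietary metadata format are not published in this paper:

```python
# Illustrative DIF SATAssure sketch: one 512-byte DIF block follows every
# 64 data blocks of 512 bytes (i.e., every 32KB), holding a 16-bit CRC for
# each of those blocks. On read, each block's CRC is recomputed and compared.

import struct

BLOCK = 512
BLOCKS_PER_DIF = 64   # 64 x 512B = 32KB of data per DIF block

def crc16(data: bytes, poly: int = 0x1021) -> int:
    """CRC-16 (CCITT polynomial assumed; SFA OS's choice is not published)."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def build_dif_block(data: bytes) -> bytes:
    """Pack one CRC per 512-byte block (128 bytes of CRCs) into a 512-byte
    DIF block; the padding stands in for the proprietary metadata area."""
    assert len(data) == BLOCK * BLOCKS_PER_DIF
    crcs = [crc16(data[i * BLOCK:(i + 1) * BLOCK]) for i in range(BLOCKS_PER_DIF)]
    return struct.pack("<64H", *crcs).ljust(BLOCK, b"\x00")

def verify_block(block: bytes, dif_block: bytes, index: int) -> bool:
    """Recompute a block's CRC on read and compare with the stored value."""
    (stored,) = struct.unpack_from("<H", dif_block, index * 2)
    return crc16(block) == stored
```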

Storage System Efficiencies

Partial Disk Rebuild

SFA OS tracks the changes made to a RAID set while a member physical disk is unavailable, and if that member becomes available again within a user-settable timeout, only the stripes that were modified while the member was missing are rebuilt. This minimizes the mean-time-to-repair for the RAID set and thus improves its data reliability, while also limiting any performance impact of a drive repair.

Real-time Adaptive Cache Technology (ReACT)

Because the SFA10K-X performs at extreme levels in both IOPS and throughput, it was desirable to achieve extreme performance in mixed workload scenarios as well. For a logical unit whose IO comprises both random and sequential traffic, it is desirable to enable caching (and cache mirroring) for high IOPS performance. With cache mirroring enabled, however, sequential IO performance suffers by having to cross the inter-controller link. Caching sequential IO also has the side effect of invalidating the random IO cache, as it fills the cache and displaces previously cached data. To remedy this situation, SFA OS employs the ReACT feature to intelligently cache or write-through data based on incoming write patterns.

[Figure 6 - Optimizing Cache Utilization with ReACT: aligned I/O is executed as single-operation, parallelized striped writes, with no cache mirroring required for fast data; unaligned I/O goes to the mirrored write-back cache, accelerating write performance and avoiding read-modify-write penalties.]

With write-back cache enabled and ReACT disabled, data written to a given logical disk with aligned full-stripe writes is cached in the write-back cache and mirrored to the partner controller. With ReACT enabled for a given logical disk, data written with aligned full-stripe writes is not cached and is instead written directly to the physical disks (i.e., write-through). Either way, non-aligned writes are written to write-back cache (Figure 6). By enabling ReACT, applications that generate aligned full-stripe writes can achieve higher performance because write data is not cached and thus not mirrored, resulting in greatly reduced inter-controller link traffic.

Rebuild Priority

SFA OS employs a tunable per-RAID-set parameter for rebuild priority. Adjusting this setting causes the rebuild engine to use less or more system resources for the rebuild operation. This feature gives an administrator the flexibility to adjust rebuild priority in relation to overall system performance: a lower rebuild priority setting consumes fewer system resources, allowing the system to devote more resources to incoming IO; conversely, it may be appropriate to increase rebuild priority to shorten rebuild time.

Software Summary

SFA OS on the SFA10K-X introduces new levels of extreme performance with several unique features. The scalable architecture is also highly extensible. The SFA10K-X with SFA OS provides the foundation upon which new data management and block virtualization features will be built in future releases. The flexibility and architecture of SFA OS allow these new features to be developed quickly, with rapid evolution of features utilizing the same hardware platform. This evolution provides long-term investment protection and enhances the longevity of SFA-based products.

SFA10K-X Hardware Architecture

The last several years have seen significant improvements in multiple commodity components. As mentioned previously, processors are increasing in both the number of computing cores and the speed of those cores. The processor and bus interconnects have evolved to speeds that were only available with proprietary designs just a short time ago. HyperTransport (HT) and Intel QuickPath Interconnect (QPI) have replaced slow Front Side Bus (FSB) technology with low-latency, point-to-point links featuring revolutionary bi-directional transfer speeds. Now that both AMD and Intel have adopted the practice of integrating the memory controller into the processor, memory access speeds are greatly increased and latency is reduced. PCIe, now in its second generation, is poised to double in speed again with the coming adoption of PCIe Generation 3. Thus, nearly all the major components, busses and IO paths around commodity computing processors have greatly improved in just the last couple of years. Combining these processing and IO capabilities with current HBAs, HCAs and NICs in a unique configuration yields an extremely powerful storage hardware platform (Figure 3).

RAID Processing

A powerful storage hardware platform is useless without a tightly integrated software architecture that squeezes every bit of performance from the components and makes them work in a harmonious fashion. The Storage Fusion Architecture data integrity engine has been written from the ground up to be multi-threaded and highly parallelized, taking maximum advantage of multi-core, multi-thread storage processors. Not only do various elements of the RAID stack run in parallel, but there are two parallel instances of the storage engine: one in RAID Processor 0 (RP0) and one in RP1 (Figure 7). Thus, the SFA10K-X actually has two parallel, multi-threaded RAID engines that work simultaneously in each controller, for a total of four RAID processors across the redundant controller pair. Further, each RAID processor runs multiple threads that manage the SFA cache, data integrity calculations and IO movers. Thus, as the number of storage system cores is increased, additional parallel processes can be run simultaneously, and both IOPS and throughput will increase accordingly.

IO Channels and Architecture

Powerful parallel RAID processors need to be able to handle massive amounts of IO on the front end (block interfaces) and the back end (disk drive pool). The SFA10K-X meets this challenge by providing each RAID processor with its own dedicated IO channels, from Fibre Channel or InfiniBand host interfaces on the front end to performance-balanced SAS disk-enclosure interfaces on the back end. A very high speed, low-latency interconnect allows for data transfers between RAID processors if and when necessary. This arrangement allows the SFA10K-X to perform at extreme levels in throughput, as data is streamlined from the host interfaces directly into RAID processors and out the back end to disks without having to contend for a shared IO bus.
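As a toy illustration of the scaling argument in the RAID Processing section: stripe-level parity work is embarrassingly parallel, so a parallelized engine gains throughput as cores are added. The sketch below reuses the pq_parity routine from the RAID section and uses a process pool, since CPython threads would serialize CPU-bound work:

```python
# Illustrative only (not SFA OS code): fanning per-stripe parity work out
# over a pool of workers shows why a highly threaded RAID engine scales
# with core count.

from concurrent.futures import ProcessPoolExecutor

def parity_for_stripes(stripes, workers: int):
    """Compute P/Q for many stripes concurrently across `workers` cores;
    pq_parity is the toy RAID-6 routine sketched earlier."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(pq_parity, stripes))
```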

[Figure 7 - SFA10K-X Streamlined IO Paths: each controller presents front-end block interfaces of up to 80Gb/s; RP0 and RP1 in the two controllers are joined by 60Gb/s high-speed inter-controller links; and each controller drives 20 SAS x4 links (240Gb/s) down to as many as 1,200 SAS, SATA or SSD drives.]

The ability to move data through the controller in a streamlined fashion is also what gives the SFA10K-X the ability to perform at extreme levels in IOPS. The ability to communicate via an unprecedented number of channels across multiple disks simultaneously is what allows the SFA10K-X to achieve 300,000 sustained IOPS to rotating media and over 800,000 sustained IOPS to SSD drives.

Cache

Extreme IOPS performance to disk is important, but for small-size, high-IOPS data patterns where latency becomes the gating factor, cache is a necessity. The SFA10K-X offers high levels of mirrored cache, 16GB in total. Cache is implemented in DDR3 SDRAM memory for the lowest-latency, highest-performing cache. In the case of a power event, the SFA10K-X utilizes a dedicated battery backup unit to hold up the controller while unflushed write-back cache data is transferred to internal, non-volatile, mirrored storage.

Back End Disk Connectivity

Overall, the design of the SFA10K-X hardware is about balance. The extreme performance capabilities of the host ports are facilitated by a streamlined IO path directly to the back-end disks. The massive 480Gb/s internal SAS network not only serves the IOPS and throughput needs of the controller itself, but has ample headroom for additional IO operations internal to the architecture. This headroom allows disk rebuild IO to coexist with application service, as there is plenty of bandwidth for both to occur simultaneously. By providing 40 x4 SAS channels to serve 1,200 disk drives, the ratio of drives per channel is decreased. This arrangement allows more commands to be queued per drive, as well as providing ample bandwidth for high-IOPS SSD drives.
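The back-end figures quoted above are internally consistent, as a quick back-of-envelope check shows; the 3Gb/s lane rate below is an assumption implied by the quoted totals:

```python
# Sanity-check of the back-end SAS fabric numbers quoted in this section.
links_per_controller = 20              # SAS x4 links per controller
lanes_per_link = 4
lane_gbps = 3                          # assumed, as implied by the totals

per_controller_gbps = links_per_controller * lanes_per_link * lane_gbps
total_gbps = 2 * per_controller_gbps   # both controllers of the couplet
drives_per_link = 1200 / (2 * links_per_controller)

print(per_controller_gbps)  # 240 Gb/s, matching Figure 7
print(total_gbps)           # 480 Gb/s internal SAS network
print(drives_per_link)      # 30 drives per x4 link at full population
```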

Additionally, because all of the disk enclosures are best-practice configured as 5, 10 or 20 enclosures per SFA couplet, the SFA10K-X has the ability to RAID across storage enclosures for high levels of enclosure fault tolerance. Using an 8+2 RAID configuration, the SFA controller can lose up to 4 drive enclosures (2/10ths of a 20-enclosure system's resources) on an active system and still deliver full access to online data.

Hardware Summary

This unique combination of high-performance storage processing technologies, married to an advanced, optimized software architecture, not only makes the SFA10K-X the leader in IOPS and throughput but, more importantly, serves as a high-density, fault-tolerant storage foundation for evolutionary SFA OS advances far into the future. SFA OS forms the basis for the next generation of ultra-high performance block storage. This unique hardware and software combination also lends itself to more interesting possibilities, further differentiating SFA OS.

SFA OS and Embedded File Systems

The decision to marry unique and specialized software to industry-standard hardware components in SFA lends itself to an innovation that goes far beyond block storage services: SFA OS allows for embedding applications within the SFA10K-E. The applications that make the most sense to embed (initially) are those that would benefit the most from reduced latency and high bandwidth: clustered file system services. Thus, in its first iteration, the SFA10K-E has the capability to embed the Lustre file system (the OSSs) or IBM's GPFS (the NSDs). Embedding the file system servers within the storage device reduces the number of servers, infrastructure requirements and network connections, which in turn reduces complexity, power consumption and cooling requirements. At the same time, it streamlines IO and reduces latency by removing data hops and eliminating wasteful storage protocol conversion.

Embedded Application Capability

SFA OS utilizes virtualization software to allow applications to run inside the storage device. Various methods of memory and resource protection are employed to guard the block RAID functionality and ensure that overall system resources are allocated in a secure and controlled fashion. SFA OS acts as a hypervisor, utilizing technologies such as ccNUMA and KVM to control processor, core, memory, IO and virtual disk allocations. This ensures that applications running in the embedded space cannot affect the block RAID process memory space, and that those applications only utilize the processing and IO resources they have been assigned.

Virtualization technologies are usually associated with performance degradation, not improvements in performance. Though SFA OS utilizes software and hardware virtualization, special care and development have been undertaken to ensure not only as little performance degradation as possible, but to produce an environment that offers enhanced performance. This is largely achieved with two distinct methods.

PCIe Device Dedication

In the case of Lustre and GPFS, InfiniBand or Ethernet HCAs are commonly used as the front-end interfaces to the file system servers. Normally, virtualization technologies share hardware devices such as HCAs among virtual machines, slowing access for all and requiring virtual device drivers. SFA overcomes these traditional bottlenecks by dedicating PCIe devices directly to virtual machines. In the course of virtual machine initialization, the PCIe address space for the device in question is remapped into the virtual machine's space. When the virtual machine boots its associated OS, it sees the PCIe device (in this case, the InfiniBand or Ethernet card) natively, as if it were running on a physical machine. This allows the use of the HCA's native software drivers, eliminating any need for a virtual device. Utilizing this method, virtual machines running inside the SFA10K-E have achieved external bandwidth of 960MB/s on a 10Gb/s Ethernet card, as measured with the qperf Linux utility.

Virtual Disk Driver

By dedicating PCIe devices directly to virtual machines, there is no need to modify OS images or supply highly specialized virtual IO devices; virtual machines running inside an SFA10K-E enjoy nearly native-speed access to HCAs. The remaining hurdle is access, from the OS running inside the virtual machine, to the virtual disks (LUNs) served by the block RAID services side of SFA. This access is achieved with the addition of a small, lightweight kernel module to the Linux image running inside the virtual machine. This driver presents virtual disks assigned to the virtual machine as standard Linux block devices under /dev. What looks like a standard block device is actually a shared memory interface between the virtual machine and the block RAID services managed by SFA OS. As shown in Figure 8, what was a dedicated server, an FC HBA, an FC switch and another FC HBA is reduced to a direct memory interface at processor bus speeds. For writes from the OS to the device, data in memory is copied from the virtual machine space to the RAID space before it is manipulated by the RAID engine. This prevents the virtual machine from having write access to the RAID memory space.

[Figure 8 - IO Path Reduction in SFA10K-E Embedded File Systems: the traditional path (client, HCA/NIC, switch, HCA/NIC, server, HBA, SAN switch, HBA, storage) collapses in the SFA10K-E to client, HCA/NIC, switch, HCA/NIC and the embedded application, which places data directly into SFA memory; the elimination of protocol conversion reduces latency and improves IOPS performance.]
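The asymmetry between the write path (copied across the boundary) and the read path (shared, as the next paragraph describes) can be sketched as follows. All names are hypothetical; the actual driver is a kernel module whose interface the paper does not publish:

```python
# Illustrative sketch of the shared-memory virtual disk channel (assumed
# names; not DDN's kernel module).

class VirtualDiskChannel:
    def __init__(self, raid_engine):
        self.raid = raid_engine          # block RAID services in SFA OS

    def write(self, lba: int, data: bytes) -> None:
        # Writes are copied out of virtual machine memory into RAID-owned
        # memory before the RAID engine manipulates them, so the guest
        # never holds write access to the RAID memory space.
        staging = bytes(data)            # the copy across the boundary
        self.raid.submit_write(lba, staging)

    def read(self, lba: int, length: int) -> memoryview:
        # Reads are zero-copy: the RAID engine fills a buffer it owns and
        # hands back a shared pointer (modeled here as a memoryview) for
        # the guest to read directly.
        buf = self.raid.submit_read(lba, length)
        return memoryview(buf)
```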

On reads of the virtual disk device, the block RAID engine reads from disk, places the data in memory and passes a shared pointer to the virtual disk driver, so that the virtual machine can read directly from the RAID engine without a memory copy. Thus, IOPS-intensive loads (such as file system metadata operations) enjoy greatly reduced latency. The removal of SCSI protocol overhead, Fibre Channel interconnects, SAN switches and interface conversion reduces storage response times and lets the embedded file system take full advantage of the SFA10K-E's high-performance random IO capabilities. This IO streamlining in turn improves performance for transaction-intensive workloads.

Reduction in Equipment, Infrastructure and Complexity

By combining virtualization, an advanced new block RAID architecture and cutting-edge hardware technology, it is possible to achieve high performance while at the same time reducing complexity. As shown in Figure 9, using the Lustre file system as an example, SFA technology can result in as much as a 10-to-1 reduction in the number of managed systems, depending on deployment.

[Figure 9 - Reduction in Equipment, Infrastructure and Complexity with SFA10K-E: a traditional Lustre deployment delivering 5GB/s requires about 10 managed systems (2+ external RAID arrays, 7 servers comprising the MGS, MDS nodes and OSS nodes, and a Fibre Channel switch), whereas the SFA10K-E with embedded EXAScaler delivers the same 5GB/s as 1 scalable storage building block incorporating the Lustre servers and the SFA transactional/bandwidth storage engine.]

While clustered file system services were the first choice of applications to be embedded within SFA, virtually any application that would benefit from ultra-low latency access to block disk devices could benefit from being embedded. As processors increase in speed and in number of cores, the possibilities for what can be embedded grow along with the performance of the block RAID engine.

SFA10K Family: Summary

Disk storage systems simply enable computational output to reside on non-volatile media, as opposed to remaining dependent on more volatile media (RAM). Thus, their purpose is to serve compute clients rapidly, with predictable performance and integrity. To the storage environment, it should not matter whether the attached systems are processing data for a Fortune 500 enterprise, predicting global weather patterns or simulating high-energy physics. What does matter is that the technology used in those computers is becoming ever more multi-threaded. The resulting effect on storage systems is the simultaneous reading and writing of multiple files, whose access histogram appears mixed, or highly transactional, to the supporting storage. Thus, storage systems must adapt to changing data patterns in order to serve multi-threaded compute clients without bottlenecking application IO.

The SFA10K-X meets the challenges of changing data patterns by offering extreme performance in both IOPS and throughput. A unique combination of an entirely new storage operating system (SFA OS) and best-of-breed storage processing components has made a system architecture that performs well at both ends of the IO spectrum a reality. In addition to meeting the mixed IO requirements of the most intensive compute environments, SFA OS also allows for embedding clustered file system services directly inside the block storage device. This capability results in a reduction of servers, infrastructure and complexity. In addition to reducing the complexity of scale-out storage, Storage Fusion Architecture can also increase storage responsiveness by removing latency-injecting elements from the storage cluster.

Now that DataDirect Networks' move to high-speed storage processing systems is complete, rapid development of additional features is possible: advanced storage virtualization capabilities, data management features and advanced application encapsulation resulting in further infrastructure and complexity reduction. The SFA10K family is a leader in performance in both IOPS and throughput, but Storage Fusion Architecture ensures enduring leadership, as it readily adapts to and benefits from advances in the processing components it utilizes.

About DDN

DataDirect Networks (DDN) is the world's largest privately held information storage company. We are the leading provider of data storage and processing solutions and services that enable content-rich and high-growth IT environments to achieve the highest levels of systems scalability, efficiency and simplicity. DDN enables enterprises to extract value and deliver results from their information. Our customers include the world's leading online content and social networking providers; high performance cloud and grid computing, life sciences and media production organizations; and security and intelligence organizations. Deployed in thousands of mission-critical environments worldwide, DDN's solutions have been designed, engineered and proven in the world's most scalable data centers to ensure competitive business advantage for today's information-powered enterprise. For more information, go to www.ddn.com or call 1.800.TERABYTE. Version 10/11


RAID technology and IBM TotalStorage NAS products

RAID technology and IBM TotalStorage NAS products IBM TotalStorage Network Attached Storage October 2001 RAID technology and IBM TotalStorage NAS products By Janet Anglin and Chris Durham Storage Networking Architecture, SSG Page No.1 Contents 2 RAID

More information

RAID 5 rebuild performance in ProLiant

RAID 5 rebuild performance in ProLiant RAID 5 rebuild performance in ProLiant technology brief Abstract... 2 Overview of the RAID 5 rebuild process... 2 Estimating the mean-time-to-failure (MTTF)... 3 Factors affecting RAID 5 array rebuild

More information

Performance Report Modular RAID for PRIMERGY

Performance Report Modular RAID for PRIMERGY Performance Report Modular RAID for PRIMERGY Version 1.1 March 2008 Pages 15 Abstract This technical documentation is designed for persons, who deal with the selection of RAID technologies and RAID controllers

More information

IP SAN Best Practices

IP SAN Best Practices IP SAN Best Practices A Dell Technical White Paper PowerVault MD3200i Storage Arrays THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES.

More information

Definition of RAID Levels

Definition of RAID Levels RAID The basic idea of RAID (Redundant Array of Independent Disks) is to combine multiple inexpensive disk drives into an array of disk drives to obtain performance, capacity and reliability that exceeds

More information

white paper A CASE FOR VIRTUAL RAID ADAPTERS Beyond Software RAID

white paper A CASE FOR VIRTUAL RAID ADAPTERS Beyond Software RAID white paper A CASE FOR VIRTUAL RAID ADAPTERS Beyond Software RAID Table of Contents 1. Abstract...3 2. Storage Configurations...4 3. RAID Implementation...4 4. Software RAID.4-5 5. Hardware RAID Adapters...6

More information

I/O Virtualization Using Mellanox InfiniBand And Channel I/O Virtualization (CIOV) Technology

I/O Virtualization Using Mellanox InfiniBand And Channel I/O Virtualization (CIOV) Technology I/O Virtualization Using Mellanox InfiniBand And Channel I/O Virtualization (CIOV) Technology Reduce I/O cost and power by 40 50% Reduce I/O real estate needs in blade servers through consolidation Maintain

More information

Chapter 6. 6.1 Introduction. Storage and Other I/O Topics. p. 570( 頁 585) Fig. 6.1. I/O devices can be characterized by. I/O bus connections

Chapter 6. 6.1 Introduction. Storage and Other I/O Topics. p. 570( 頁 585) Fig. 6.1. I/O devices can be characterized by. I/O bus connections Chapter 6 Storage and Other I/O Topics 6.1 Introduction I/O devices can be characterized by Behavior: input, output, storage Partner: human or machine Data rate: bytes/sec, transfers/sec I/O bus connections

More information

Commoditisation of the High-End Research Storage Market with the Dell MD3460 & Intel Enterprise Edition Lustre

Commoditisation of the High-End Research Storage Market with the Dell MD3460 & Intel Enterprise Edition Lustre Commoditisation of the High-End Research Storage Market with the Dell MD3460 & Intel Enterprise Edition Lustre University of Cambridge, UIS, HPC Service Authors: Wojciech Turek, Paul Calleja, John Taylor

More information

TECHNOLOGY BRIEF. Compaq RAID on a Chip Technology EXECUTIVE SUMMARY CONTENTS

TECHNOLOGY BRIEF. Compaq RAID on a Chip Technology EXECUTIVE SUMMARY CONTENTS TECHNOLOGY BRIEF August 1999 Compaq Computer Corporation Prepared by ISSD Technology Communications CONTENTS Executive Summary 1 Introduction 3 Subsystem Technology 3 Processor 3 SCSI Chip4 PCI Bridge

More information

MS Exchange Server Acceleration

MS Exchange Server Acceleration White Paper MS Exchange Server Acceleration Using virtualization to dramatically maximize user experience for Microsoft Exchange Server Allon Cohen, PhD Scott Harlin OCZ Storage Solutions, Inc. A Toshiba

More information

Virtualization of the MS Exchange Server Environment

Virtualization of the MS Exchange Server Environment MS Exchange Server Acceleration Maximizing Users in a Virtualized Environment with Flash-Powered Consolidation Allon Cohen, PhD OCZ Technology Group Introduction Microsoft (MS) Exchange Server is one of

More information

Cray DVS: Data Virtualization Service

Cray DVS: Data Virtualization Service Cray : Data Virtualization Service Stephen Sugiyama and David Wallace, Cray Inc. ABSTRACT: Cray, the Cray Data Virtualization Service, is a new capability being added to the XT software environment with

More information

The IntelliMagic White Paper: Storage Performance Analysis for an IBM Storwize V7000

The IntelliMagic White Paper: Storage Performance Analysis for an IBM Storwize V7000 The IntelliMagic White Paper: Storage Performance Analysis for an IBM Storwize V7000 Summary: This document describes how to analyze performance on an IBM Storwize V7000. IntelliMagic 2012 Page 1 This

More information

Post-production Video Editing Solution Guide with Microsoft SMB 3 File Serving AssuredSAN 4000

Post-production Video Editing Solution Guide with Microsoft SMB 3 File Serving AssuredSAN 4000 Post-production Video Editing Solution Guide with Microsoft SMB 3 File Serving AssuredSAN 4000 Dot Hill Systems introduction 1 INTRODUCTION Dot Hill Systems offers high performance network storage products

More information

Integration of Microsoft Hyper-V and Coraid Ethernet SAN Storage. White Paper

Integration of Microsoft Hyper-V and Coraid Ethernet SAN Storage. White Paper Integration of Microsoft Hyper-V and Coraid Ethernet SAN Storage White Paper June 2011 2011 Coraid, Inc. Coraid, Inc. The trademarks, logos, and service marks (collectively "Trademarks") appearing on the

More information

IBM System x GPFS Storage Server

IBM System x GPFS Storage Server IBM System x GPFS Storage Crispin Keable Technical Computing Architect 1 IBM Technical Computing comprehensive portfolio uniquely addresses supercomputing and mainstream client needs Technical Computing

More information

Using RAID6 for Advanced Data Protection

Using RAID6 for Advanced Data Protection Using RAI6 for Advanced ata Protection 2006 Infortrend Corporation. All rights reserved. Table of Contents The Challenge of Fault Tolerance... 3 A Compelling Technology: RAI6... 3 Parity... 4 Why Use RAI6...

More information

Using Synology SSD Technology to Enhance System Performance Synology Inc.

Using Synology SSD Technology to Enhance System Performance Synology Inc. Using Synology SSD Technology to Enhance System Performance Synology Inc. Synology_SSD_Cache_WP_ 20140512 Table of Contents Chapter 1: Enterprise Challenges and SSD Cache as Solution Enterprise Challenges...

More information

enabling Ultra-High Bandwidth Scalable SSDs with HLnand

enabling Ultra-High Bandwidth Scalable SSDs with HLnand www.hlnand.com enabling Ultra-High Bandwidth Scalable SSDs with HLnand May 2013 2 Enabling Ultra-High Bandwidth Scalable SSDs with HLNAND INTRODUCTION Solid State Drives (SSDs) are available in a wide

More information

MICROSOFT HYPER-V SCALABILITY WITH EMC SYMMETRIX VMAX

MICROSOFT HYPER-V SCALABILITY WITH EMC SYMMETRIX VMAX White Paper MICROSOFT HYPER-V SCALABILITY WITH EMC SYMMETRIX VMAX Abstract This white paper highlights EMC s Hyper-V scalability test in which one of the largest Hyper-V environments in the world was created.

More information

Price/performance Modern Memory Hierarchy

Price/performance Modern Memory Hierarchy Lecture 21: Storage Administration Take QUIZ 15 over P&H 6.1-4, 6.8-9 before 11:59pm today Project: Cache Simulator, Due April 29, 2010 NEW OFFICE HOUR TIME: Tuesday 1-2, McKinley Last Time Exam discussion

More information

The Future of Computing Cisco Unified Computing System. Markus Kunstmann Channels Systems Engineer

The Future of Computing Cisco Unified Computing System. Markus Kunstmann Channels Systems Engineer The Future of Computing Cisco Unified Computing System Markus Kunstmann Channels Systems Engineer 2009 Cisco Systems, Inc. All rights reserved. Data Centers Are under Increasing Pressure Collaboration

More information

Scaling from Datacenter to Client

Scaling from Datacenter to Client Scaling from Datacenter to Client KeunSoo Jo Sr. Manager Memory Product Planning Samsung Semiconductor Audio-Visual Sponsor Outline SSD Market Overview & Trends - Enterprise What brought us to NVMe Technology

More information

An Oracle White Paper May 2011. Exadata Smart Flash Cache and the Oracle Exadata Database Machine

An Oracle White Paper May 2011. Exadata Smart Flash Cache and the Oracle Exadata Database Machine An Oracle White Paper May 2011 Exadata Smart Flash Cache and the Oracle Exadata Database Machine Exadata Smart Flash Cache... 2 Oracle Database 11g: The First Flash Optimized Database... 2 Exadata Smart

More information

Oracle Database Deployments with EMC CLARiiON AX4 Storage Systems

Oracle Database Deployments with EMC CLARiiON AX4 Storage Systems Oracle Database Deployments with EMC CLARiiON AX4 Storage Systems Applied Technology Abstract This white paper investigates configuration and replication choices for Oracle Database deployment with EMC

More information

Big data management with IBM General Parallel File System

Big data management with IBM General Parallel File System Big data management with IBM General Parallel File System Optimize storage management and boost your return on investment Highlights Handles the explosive growth of structured and unstructured data Offers

More information

Solution Brief July 2014. All-Flash Server-Side Storage for Oracle Real Application Clusters (RAC) on Oracle Linux

Solution Brief July 2014. All-Flash Server-Side Storage for Oracle Real Application Clusters (RAC) on Oracle Linux Solution Brief July 2014 All-Flash Server-Side Storage for Oracle Real Application Clusters (RAC) on Oracle Linux Traditional SAN storage systems cannot keep up with growing application performance needs.

More information

21 st Century Storage What s New and What s Changing

21 st Century Storage What s New and What s Changing 21 st Century Storage What s New and What s Changing Randy Kerns Senior Strategist Evaluator Group Overview New technologies in storage - Continued evolution - Each has great economic value - Differing

More information

Hardware RAID vs. Software RAID: Which Implementation is Best for my Application?

Hardware RAID vs. Software RAID: Which Implementation is Best for my Application? STORAGE SOLUTIONS WHITE PAPER Hardware vs. Software : Which Implementation is Best for my Application? Contents Introduction...1 What is?...1 Software...1 Software Implementations...1 Hardware...2 Hardware

More information

How To Make A Backup System More Efficient

How To Make A Backup System More Efficient Identifying the Hidden Risk of Data De-duplication: How the HYDRAstor Solution Proactively Solves the Problem October, 2006 Introduction Data de-duplication has recently gained significant industry attention,

More information

Software-defined Storage at the Speed of Flash

Software-defined Storage at the Speed of Flash TECHNICAL BRIEF: SOFTWARE-DEFINED STORAGE AT THE SPEED OF... FLASH..................................... Intel SSD Data Center P3700 Series and Symantec Storage Foundation with Flexible Storage Sharing

More information

Post Production Video Editing Solution Guide with Apple Xsan File System AssuredSAN 4000

Post Production Video Editing Solution Guide with Apple Xsan File System AssuredSAN 4000 Post Production Video Editing Solution Guide with Apple Xsan File System AssuredSAN 4000 Dot Hill Systems introduction 1 INTRODUCTION Dot Hill Systems offers high performance network storage products that

More information

The Evolution of Microsoft SQL Server: The right time for Violin flash Memory Arrays

The Evolution of Microsoft SQL Server: The right time for Violin flash Memory Arrays The Evolution of Microsoft SQL Server: The right time for Violin flash Memory Arrays Executive Summary Microsoft SQL has evolved beyond serving simple workgroups to a platform delivering sophisticated

More information

Intel RAID SSD Cache Controller RCS25ZB040

Intel RAID SSD Cache Controller RCS25ZB040 SOLUTION Brief Intel RAID SSD Cache Controller RCS25ZB040 When Faster Matters Cost-Effective Intelligent RAID with Embedded High Performance Flash Intel RAID SSD Cache Controller RCS25ZB040 When Faster

More information

Building a Flash Fabric

Building a Flash Fabric Introduction Storage Area Networks dominate today s enterprise data centers. These specialized networks use fibre channel switches and Host Bus Adapters (HBAs) to connect to storage arrays. With software,

More information

RAID. RAID 0 No redundancy ( AID?) Just stripe data over multiple disks But it does improve performance. Chapter 6 Storage and Other I/O Topics 29

RAID. RAID 0 No redundancy ( AID?) Just stripe data over multiple disks But it does improve performance. Chapter 6 Storage and Other I/O Topics 29 RAID Redundant Array of Inexpensive (Independent) Disks Use multiple smaller disks (c.f. one large disk) Parallelism improves performance Plus extra disk(s) for redundant data storage Provides fault tolerant

More information

NetApp High-Performance Computing Solution for Lustre: Solution Guide

NetApp High-Performance Computing Solution for Lustre: Solution Guide Technical Report NetApp High-Performance Computing Solution for Lustre: Solution Guide Robert Lai, NetApp August 2012 TR-3997 TABLE OF CONTENTS 1 Introduction... 5 1.1 NetApp HPC Solution for Lustre Introduction...5

More information

Performance Analysis: Scale-Out File Server Cluster with Windows Server 2012 R2 Date: December 2014 Author: Mike Leone, ESG Lab Analyst

Performance Analysis: Scale-Out File Server Cluster with Windows Server 2012 R2 Date: December 2014 Author: Mike Leone, ESG Lab Analyst ESG Lab Review Performance Analysis: Scale-Out File Server Cluster with Windows Server 2012 R2 Date: December 2014 Author: Mike Leone, ESG Lab Analyst Abstract: This ESG Lab review documents the storage

More information

All-Flash Arrays Weren t Built for Dynamic Environments. Here s Why... This whitepaper is based on content originally posted at www.frankdenneman.

All-Flash Arrays Weren t Built for Dynamic Environments. Here s Why... This whitepaper is based on content originally posted at www.frankdenneman. WHITE PAPER All-Flash Arrays Weren t Built for Dynamic Environments. Here s Why... This whitepaper is based on content originally posted at www.frankdenneman.nl 1 Monolithic shared storage architectures

More information

Distribution One Server Requirements

Distribution One Server Requirements Distribution One Server Requirements Introduction Welcome to the Hardware Configuration Guide. The goal of this guide is to provide a practical approach to sizing your Distribution One application and

More information

STORAGE CENTER. The Industry s Only SAN with Automated Tiered Storage STORAGE CENTER

STORAGE CENTER. The Industry s Only SAN with Automated Tiered Storage STORAGE CENTER STORAGE CENTER DATASHEET STORAGE CENTER Go Beyond the Boundaries of Traditional Storage Systems Today s storage vendors promise to reduce the amount of time and money companies spend on storage but instead

More information

Dell Virtualization Solution for Microsoft SQL Server 2012 using PowerEdge R820

Dell Virtualization Solution for Microsoft SQL Server 2012 using PowerEdge R820 Dell Virtualization Solution for Microsoft SQL Server 2012 using PowerEdge R820 This white paper discusses the SQL server workload consolidation capabilities of Dell PowerEdge R820 using Virtualization.

More information

Moving Virtual Storage to the Cloud

Moving Virtual Storage to the Cloud Moving Virtual Storage to the Cloud White Paper Guidelines for Hosters Who Want to Enhance Their Cloud Offerings with Cloud Storage www.parallels.com Table of Contents Overview... 3 Understanding the Storage

More information

SUN STORAGE F5100 FLASH ARRAY

SUN STORAGE F5100 FLASH ARRAY SUN STORAGE F5100 FLASH ARRAY KEY FEATURES ACCELERATING DATABASE PERFORMANCE WITH THE WORLD S FASTEST SOLID- STATE FLASH ARRAY Unprecedented performance, power, and space efficiency World s first flash

More information

Express5800 Scalable Enterprise Server Reference Architecture. For NEC PCIe SSD Appliance for Microsoft SQL Server

Express5800 Scalable Enterprise Server Reference Architecture. For NEC PCIe SSD Appliance for Microsoft SQL Server Express5800 Scalable Enterprise Server Reference Architecture For NEC PCIe SSD Appliance for Microsoft SQL Server An appliance that significantly improves performance of enterprise systems and large-scale

More information

HGST Virident Solutions 2.0

HGST Virident Solutions 2.0 Brochure HGST Virident Solutions 2.0 Software Modules HGST Virident Share: Shared access from multiple servers HGST Virident HA: Synchronous replication between servers HGST Virident ClusterCache: Clustered

More information

IBM Global Technology Services September 2007. NAS systems scale out to meet growing storage demand.

IBM Global Technology Services September 2007. NAS systems scale out to meet growing storage demand. IBM Global Technology Services September 2007 NAS systems scale out to meet Page 2 Contents 2 Introduction 2 Understanding the traditional NAS role 3 Gaining NAS benefits 4 NAS shortcomings in enterprise

More information

Solving I/O Bottlenecks to Enable Superior Cloud Efficiency

Solving I/O Bottlenecks to Enable Superior Cloud Efficiency WHITE PAPER Solving I/O Bottlenecks to Enable Superior Cloud Efficiency Overview...1 Mellanox I/O Virtualization Features and Benefits...2 Summary...6 Overview We already have 8 or even 16 cores on one

More information

Moving Virtual Storage to the Cloud. Guidelines for Hosters Who Want to Enhance Their Cloud Offerings with Cloud Storage

Moving Virtual Storage to the Cloud. Guidelines for Hosters Who Want to Enhance Their Cloud Offerings with Cloud Storage Moving Virtual Storage to the Cloud Guidelines for Hosters Who Want to Enhance Their Cloud Offerings with Cloud Storage Table of Contents Overview... 1 Understanding the Storage Problem... 1 What Makes

More information

The functionality and advantages of a high-availability file server system

The functionality and advantages of a high-availability file server system The functionality and advantages of a high-availability file server system This paper discusses the benefits of deploying a JMR SHARE High-Availability File Server System. Hardware and performance considerations

More information

Using Synology SSD Technology to Enhance System Performance Synology Inc.

Using Synology SSD Technology to Enhance System Performance Synology Inc. Using Synology SSD Technology to Enhance System Performance Synology Inc. Synology_WP_ 20121112 Table of Contents Chapter 1: Enterprise Challenges and SSD Cache as Solution Enterprise Challenges... 3 SSD

More information