Flash Storage Management Algorithm for Large-Scale Hybrid Storage Systems


Flash Storage Management Algorithm for Large-Scale Hybrid Storage Systems by Abdullah Hasan Aldhamin, B.Sc. (Honours), King Fahd University of Petroleum and Minerals, 2007. A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in the School of Computing Science, Faculty of Applied Sciences. © Abdullah Hasan Aldhamin 2014, SIMON FRASER UNIVERSITY, Fall 2014. All rights reserved. However, in accordance with the Copyright Act of Canada, this work may be reproduced without authorization under the conditions for Fair Dealing. Therefore, limited reproduction of this work for the purposes of private study, research, criticism, review and news reporting is likely to be in accordance with the law, particularly if cited appropriately.

APPROVAL
Name: Abdullah Hasan Aldhamin
Degree: Master of Science
Title of Thesis: Flash Storage Management Algorithm for Large-Scale Hybrid Storage Systems
Examining Committee:
Dr. Joseph Peters, Chair
Dr. Mohamed Hefeeda, Senior Supervisor, Professor
Dr. Raed Al Shaikh, Supervisor, IT Systems Specialist, Saudi Aramco
Dr. Jiangchuan (JC) Liu, Internal Examiner, Professor
Date Approved: November 27, 2014

Partial Copyright Licence

Abstract

As computing platforms have evolved over the years, the associated storage requirements have also changed rapidly in terms of performance, cost, availability and scalability. In addition, computing elements, mainly the CPU, continue to scale and develop at a higher pace than storage systems. Flash-based solid-state drives (SSDs) have led to significant innovations in storage systems architecture. However, due to their special design and architectural characteristics, they are not considered a cost-effective, immediate replacement for traditional hard-disk drives in large-scale storage systems. Thus, how we can best utilize this technology to build an efficient hybrid storage system remains a research challenge. We propose a real-time dynamic programming algorithm, called the Flash Storage Management (FSM) algorithm, to address this challenge. The FSM algorithm can run in two modes: online and offline. We implement the proposed FSM algorithm in an event-driven simulator. To compare against the FSM algorithm, we also implement a simulator for the closest algorithm in the literature, Hystor. Our evaluation results indicate that the proposed algorithm outperforms Hystor, especially for read-intensive workloads. For example, the online FSM algorithm achieves a hit ratio of 75% when using an SSD sized at 30% of the workload, which outperforms Hystor by more than 20%. Keywords: large-scale hybrid storage; solid-state drive; flash storage management

To my holy love, my wife Zinab, and my daughters, Fatimah and Maryam

He who travels in the search of knowledge, to him God shows the way of Paradise.
Prophet Mohammad

Acknowledgments

Foremost, I would like to express my sincere gratitude to my senior supervisor, Dr. Mohamed Hefeeda, for mentoring me over the past two years with his patience and immense knowledge. I would also like to express my gratitude to Dr. Raed Al Shaikh, my supervisor, and Dr. Jiangchuan (JC) Liu, my thesis examiner, for being on my committee and reviewing this thesis. I also thank Dr. Joseph Peters for taking the time to chair my thesis defense. I would like to express my deepest thanks to Dr. Tamir Hegazy for his time and immense knowledge, for all I have learned from him, and for his continuous help and support in all stages of this thesis. I would like to thank all my colleagues at the Network Systems Lab for their support and help; it was my honor to work with these talented people. I would also like to thank my friends for their encouragement and constant support. A big thank-you goes to my friend Christopher Morris and his family for the support and encouragement they have given me. I am indebted to my family for their love, encouragement, and endless support. I owe my deepest gratitude to my parents and my parents-in-law. I owe my loving thanks to my wife, Zinab, and my daughters, Fatimah and Maryam. They have lost a lot during my graduate career at SFU, especially during my illness. Without their encouragement and understanding it would have been impossible for me to finish this thesis. I will never forget what I owe them, and my eternal gratitude to them cannot be expressed in words. This thesis is dedicated to them.

Contents

Approval
Partial Copyright License
Abstract
Dedication
Quotation
Acknowledgments
Contents
List of Tables
List of Figures

1 Introduction
  1.1 Overview
  1.2 Problem Statement and Thesis Contributions
  1.3 Thesis Organization

2 Background and Related Work
  2.1 Storage Systems
  2.2 Solid-State Drive: Architecture
  2.3 Solid-State Drives: Characteristics and Implications
  2.4 Understanding Application I/O Characteristics
  2.5 Markov Decision Process
  2.6 Related Work
    Gordon High Performance System: SSD-only Cluster
    Griffin Hybrid Storage System: Flash as an End-Point
    HybridStore
    Hystor

3 Proposed Flash Storage Management Algorithm
  Overview
  Proposed Flash Storage Management (FSM) Algorithm
  Analysis and Discussion
    Optimality
    Time Complexity
    Space Complexity
  Applications of Flash Storage Management Algorithm
    Database Management Systems
    Hybrid SAN for Virtualized Environment

4 Evaluation
  Experimental Setup
  Experimental Results
    Cost Function Convergence
    Hit Ratio
    Data Swapping
    Comparison with Hystor
  Summary

5 Conclusions and Future Work
  Conclusions
  Future Work

Bibliography

List of Tables

2.1 Gordon technical specifications
List of symbols used in the thesis

List of Figures

1.1 Integration of SSDs in storage systems
2.1 Architecture of the Storage Area Network (SAN)
2.2 Architecture of the Network Attached Storage (NAS)
2.3 Logical architecture of flash-based solid state drives
Hystor architecture
Main architecture
FSM flow from input into performance measurement output
Online flash storage management algorithm flow chart
Offline flash storage management algorithm flow chart
Database management system components
Cost function convergence for the Flash Storage Management (FSM) algorithm using read-intensive workload
Cost function convergence for the Flash Storage Management (FSM) algorithm using write-intensive workload
FSM hit ratio for read-intensive workload: Online vs. Offline
FSM hit ratio for write-intensive workload: Online vs. Offline
FSM hit ratio for composite workload: Online vs. Offline
FSM data swapping for read-intensive workload: Online vs. Offline
FSM data swapping for write-intensive workload: Online vs. Offline
FSM data swapping for composite workload: Online vs. Offline
Block table data structure used in Hystor
Hit ratio comparison for a read-intensive workload: Hystor vs. Online FSM
Hit ratio comparison for a write-intensive workload: Hystor vs. Online FSM
4.12 Hit ratio comparison for a composite workload: Hystor vs. Online FSM
Data swapping for read-intensive workload: Hystor vs. Online FSM
Data swapping for write-intensive workload: Hystor vs. Online FSM
Data swapping for a composite workload: Hystor vs. Online FSM

Chapter 1

Introduction

1.1 Overview

Storage systems are essential components of the computing hierarchy, ranging from mobile devices and personal computers all the way through high performance computing (HPC) and cloud computing. Similar to other computing elements, storage technology has enjoyed considerable growth since the first drive was introduced in 1956 [34]. This has been facilitated in part by the steady evolution of storage interfaces, e.g., the Small Computer System Interface (SCSI) and AT Attachment (ATA). Since the first disk, disks have grown by over six orders of magnitude in density and over four orders in performance. As computing platforms have evolved over the years, the associated storage requirements have also changed rapidly in terms of performance, cost, availability and scalability. In addition, the computing environments themselves have changed. As organizations grow their business, a new transformation is needed to ensure a cost-effective computing environment is available to meet new computational challenges. Furthermore, it has become essential to share data among different entities within and outside organizations. Thus, options for connecting computers to storage systems have increased dramatically in a short time, driven by the rapid change in application development. Storage networking offers significant capabilities and flexibility not previously available. However, no single networking approach solves all problems or optimizes all variables; there are trade-offs in cost, ease of management, performance, distance and maturity, to name a few. Thus, multiple storage network alternatives coexist within the same organization to serve different needs. It is important to note that the different storage architectures are the results

from different storage interfaces. That is, the key principle that differentiates storage architectures is the interface between the storage and the computing system, which determines the functionality supported by the devices. Storage media devices have evolved drastically from large trunks with the capacity to hold a few kilobytes of data to microchips able to hold a few terabytes of data. Hard disk drives (HDDs) have evolved in both size and I/O speed. However, due to the nature of their mechanical design, they have reached levels that cannot be exceeded without compromising other factors such as power consumption or performance. Thus, storage devices continue to be a concern as processors become more advanced and data acquisition rates increase. Solid-state drives (SSDs) are a newer type of storage that contains no moving parts, allowing data to be stored on flash memory technology. However, SSDs come in smaller sizes compared to HDDs, and their more complicated internal design raises several concerns about adopting this new technology at large scale.

1.2 Problem Statement and Thesis Contributions

Solid-state drives have revolutionized storage technology with their performance characteristics, such as their random read/write I/O performance, low energy consumption and compact size. They are used in consumer devices such as smart phones, tablets and personal computers. In recent years there has been great interest in adopting this technology in large-scale storage systems to improve storage I/O performance, and hence enhance application runtime. With all the attractive features of this technology, replacing conventional hard-disk drives with SSDs may not be a feasible option for large-scale storage systems. This is mainly due to their small capacity, limited lifetime, and relatively high cost.
Thus, a more practical solution is to use SSDs in a hybrid system, such that the features of this technology are best utilized. The key challenges are to decide what role the SSD should have in the storage hierarchy, and what data should be stored in it. Figure 1.1 summarizes the possible scenarios for integrating SSDs in storage systems. Should we implement flash-only storage, or consider a hybrid solution that takes advantage of the huge capacity of the HDD and the performance advantages of the SSD? If we consider a hybrid approach, then what role should the SSD be given? In the case of a hybrid solution, it can be used as the major storage media to store all data and serve all I/O requests from

the SSDs. Finally, if it were to be used as an accelerator, should it serve as a read cache, as a write buffer, or both?

Figure 1.1: Integration of SSDs in storage systems.

The problem addressed in this thesis can be stated as follows: given a hybrid storage system composed of HDDs as the primary storage media and SSDs as the secondary storage media, where data is stored in HDDs and can be cached in the SSDs, design an algorithm to allocate or cache data in the SSDs such that data access from the SSDs is maximized, in order to improve the performance of the storage system. To solve this problem, this thesis makes the following contributions: a) we propose a pre-allocation flash storage management algorithm based on a real-time dynamic programming technique for large-scale hybrid storage systems, where SSDs are used as a secondary storage device to improve storage I/O performance. Our algorithm efficiently uses a pre-collected I/O trace log of the target application to train the storage system by extracting the most performance-critical data chunks and ensuring that they are accessed from the SSDs, consequently improving application performance; b) we implement an event-driven simulator to assess the performance of our proposed algorithm,

c) we conduct extensive experiments with different I/O workloads; our simulation results show that our proposed algorithm consistently improves performance under different system parameters; d) we compare our proposed algorithm to the closest work in the literature, Hystor; our experimental results confirm that our algorithm outperforms Hystor.

1.3 Thesis Organization

The rest of this thesis is organized as follows. Chapter 2 provides background on flash-based solid-state drives, their architecture, characteristics and features, and their applications, especially hybrid storage systems. It also summarizes the related work in the literature. In Chapter 3, we describe the proposed flash storage management algorithm and the considered system model. We discuss and analyze the algorithm's complexity and consider possible applications for our algorithm. Evaluation of the proposed algorithm using simulation is given in Chapter 4. We conclude the thesis in Chapter 5.

Chapter 2

Background and Related Work

This chapter first provides background on storage systems, the architecture of SSDs, their salient features and implications, and the I/O characteristics of applications. We then describe the Markov Decision Process, which is used in solving the considered problem. Finally, we survey the related work in the literature.

2.1 Storage Systems

There are three key types of storage networking models, namely: direct attached storage (DAS), network attached storage (NAS), and storage area network (SAN) [44, 21, 13]. The most traditional storage architecture is direct attached storage. As the name suggests, the storage media, usually a disk or a tape drive, is directly attached through a cable to the computer system. It is generally restricted to access by a single host. This architecture is optimized for single, isolated computing, and the storage usually resides inside the server enclosure. In addition, it provides users with block-level access to data through the SCSI protocol. This architecture, however, is not a viable option for today's enterprise computing environment, where huge amounts of data must be shared among users. Thus, several standards are used to enable these capabilities by networking the storage system. One of the most used network architectures is the storage area network, SAN, which is a dedicated network for storage devices and the processors that access those devices [19, 37]. SANs, as illustrated in Figure 2.1, are usually built using Fibre Channel (FC) technology. Similar to DAS, SANs also provide block I/O access to storage through the SCSI protocol. In order to allow SCSI I/O commands to be sent over a TCP/IP-based network, the Internet

SCSI, commonly known as iSCSI, protocol is used to facilitate this feature. Storage devices in the SAN system are managed through a SAN server. SAN provides several advantages to the computing environment. First, it improves storage utilization by consolidating scattered storage devices into fewer devices. Second, it enables sharing among users. Third, one of the most important features of SAN is scalability, since multiple SAN devices can be made available as a single pool of storage to all processors on the SAN. Additionally, centralized management of the whole SAN minimizes the overhead for system administrators. On the other hand, certain aspects of SAN need careful consideration to ensure a cost-effective implementation. Since SANs use a specialized network, the initial cost to implement a SAN will generally be higher than other options. Furthermore, SANs require specialized hardware and software to provide many of their potential benefits. Also, an organization must add new skills to manage this sophisticated technology. Yet, some analyses [44, 20, 8, 43] suggest that SANs have a lower total cost of ownership (TCO) in the long term compared to alternative connectivity approaches. Another widely used connectivity approach for storage systems is network-attached storage, commonly known as NAS [44, 37, 13]. A NAS device, as illustrated in Figure 2.2, consists of an integrated processor, called the controller or the NAS head, and disk storage. The NAS device is connected to a standard TCP/IP network and provides access to users through file-access/file-sharing protocols such as the Network File System (NFS) or the Common Internet File System (CIFS). The controller in the NAS device manages the files located on disks, and issues block I/O requests to the disks to fulfill the file I/O read and write requests it receives.
There are a few distinctions between the SAN and NAS approaches. First, the main difference is the storage interface: SAN enables block I/O access, whereas NAS provides file I/O access, which is a higher-level request that specifies the file to be accessed, an offset into the file, and a number of bytes to read or write beginning at that offset. This, in turn, means that file I/O has no knowledge of disk volumes or disk sectors. Second, unlike a SAN, a NAS device resides on a network that might be shared with non-storage traffic. NAS brings several benefits that give it an advantage for widespread adoption over other options [37, 44]. Probably its most notable feature is ease of installation. It does not require special hardware, or specific skills for installation and management. Additionally, it can be installed on an existing LAN/WAN network. Another salient feature of NAS appliances is the snapshot backup, which makes backup copies of data onto

Figure 2.1: Architecture of the Storage Area Network (SAN).

Figure 2.2: Architecture of the Network Attached Storage (NAS).

tapes, for example, while minimizing application downtime. Similar facilities are available for SAN as well, but they require specific customization and special storage management packages. Furthermore, NAS allows capacity within the appliance to be pooled. That is, the NAS device is configured as one or more file systems, each residing on a specified set of disk volumes. All users accessing the same file system are assigned space within it on demand. This is more efficient than buying users their own disk volumes, as in the DAS scenario, which often leads to some users having too much capacity and others too little. On the other hand, because NAS pooling resides within a NAS appliance, there is little if any sharing of resources across multiple appliances. This raises costs and management complexity as the number of NAS nodes increases. In contrast, an advantage of a SAN is that all devices on a SAN can be pooled. Another NAS capability is file sharing. Although it can be configured on a SAN, it requires further specialized software. Both SAN and NAS utilize RAID systems that are connected to an interconnect network. The choice of RAID scheme depends on the target security and redundancy level. RAID 6, known commercially as RAID DP, is probably the most widely used scheme on networked storage. It provides block-level striping with double distributed parity, which makes it very efficient at providing fault tolerance for up to two failed disks in the same RAID group. Other computing elements, mainly the CPU, continue to scale and develop at a higher pace compared to storage systems [6]. Even as storage bandwidth increased (though slower than compute speed), storage performance improved only marginally. This has added even more challenges to closing the gap between the processor and the storage in order to ensure a scalable computing infrastructure, especially when dealing with today's complex computational problems.
Storage media devices, such as hard disk drives, are the core part of any storage system. Due to the nature of their mechanical design, improving their performance requires speeding up their rotation, which, in turn, requires more mechanical power. Flash-based solid-state drives have led to significant innovations in storage systems architecture. They have been instrumental in modern consumer devices such as personal computers, smart phones, and tablets. This is mainly due to their design, which is based on electronic chips, and their small size. They use significantly less power and provide higher performance. Furthermore, in recent years SSDs have been introduced in large-scale storage systems. However, the challenging problem is how to integrate this technology into the storage hierarchy in a way that best utilizes its features.

2.2 Solid-State Drive: Architecture

Among the necessities of large-scale storage systems is delivering high and reliable performance, especially for applications that require high I/O operations per second (IOPS), while keeping the associated cost as low as possible. One of the direct performance bottlenecks in these systems is the hard disk drive. Solid-state drives bring solutions to these performance challenges, e.g., their random access performance, but they also bring some weaknesses to storage systems. In addition, they support the same interfaces as conventional hard disk drives, both physically, e.g., the SATA interface, and logically, e.g., Logical Block Addressing, which makes switching to SSDs intuitive.

Figure 2.3: Logical architecture of flash-based solid state drives.

Flash-based SSDs are typically built on an array of flash memory packages [42], as shown in Figure 2.3. As logical pages are striped over flash chips, high bandwidth can be achieved through parallel access. In addition, a serial I/O bus connects each flash memory package to a controller. The controller receives and processes requests from the host through the connection interface and issues commands to transfer data from/to the flash memory array. When a read request arrives, the data is first read from flash memory into the register of the plane, then the data is shifted via the serial bus to the controller. A write request is processed with similar steps but in reverse order. In some designs, SSDs are equipped with an external RAM buffer to cache data or metadata [28].
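To make the channel-level parallelism concrete, the sketch below shows one common way logical pages can be striped across channels and chips. The geometry (4 channels, 2 chips per channel) and the channel-first layout are illustrative assumptions, not the layout of any particular drive:

```python
# Stripe consecutive logical page numbers (LPNs) across channels first,
# then chips, so a sequential read can engage all channels in parallel.
# The geometry below (4 channels, 2 chips each) is an illustrative assumption.
NUM_CHANNELS = 4
CHIPS_PER_CHANNEL = 2

def place(lpn):
    """Return the (channel, chip, page-within-chip) location of this LPN."""
    channel = lpn % NUM_CHANNELS
    chip = (lpn // NUM_CHANNELS) % CHIPS_PER_CHANNEL
    page = lpn // (NUM_CHANNELS * CHIPS_PER_CHANNEL)
    return channel, chip, page

# Eight consecutive logical pages land on eight distinct (channel, chip)
# pairs, so a sequential read of that range proceeds fully in parallel.
for lpn in range(8):
    print(lpn, place(lpn))
```

With this layout, a sequential transfer of eight pages keeps every chip busy at once, which is how striping turns modest per-chip bandwidth into high aggregate bandwidth.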
Flash SSDs are made of semiconductor chips. Based on the number of bits stored per cell, flash SSDs are classified into three types: Single-Level Cell (SLC), Multi-Level Cell (MLC) and Triple-Level Cell (TLC). The choice of cell type imposes several design trade-offs, namely:

density, reliability and performance. An SLC cell can store 1 bit, an MLC cell 2 bits, and a TLC cell 3 bits. More bits per cell increases the capacity of the device. However, this has a drawback concerning device reliability: more bits per cell leads to more frequent garbage collection, and hence the device tends to wear out faster. In addition, performance is impacted by the number of bits per cell, due to the increased overhead of managing data within cells [25]. The core component of the SSD is the Flash Translation Layer [5, 28], which is implemented in the SSD controller to emulate a hard disk and expose an array of logical blocks to the upper-level components. Its significant role in SSD design has led to several sophisticated mechanisms to optimize SSD performance and lifetime. The Flash Translation Layer performs three major tasks in SSDs. First, logical block mapping, to organize the mapping of write operations from logical block addresses to physical block addresses. Unlike in conventional hard disk drives, writes in SSDs cannot be performed in place, and each write of a logical page is actually conducted on a different physical page. Different approaches have been implemented for address mapping using different granularities, e.g., block vs. page [26]. In most cases, however, Flash Translation Layers are designed to use a hybrid approach, such that block-level mapping is used to map most blocks as data blocks, and page-level mapping is used to manage a small set of logical blocks. This choice is efficient as a buffer to accept incoming write requests. Second, the Flash Translation Layer facilitates garbage collection operations. In SSDs, a block must be erased before it can be reprogrammed, which is why SSDs are usually overprovisioned with a certain amount of clean blocks as an allocation pool.
In the case of writes, the previously occupied physical page is invalidated by updating the metadata, and new data can be appended to a new clean block from the pool, without having to synchronously perform an erase operation. When running out of clean blocks in the pool, a garbage collector scans the log and recycles invalidated pages. Third, another important task of the Flash Translation Layer is wear leveling. Due to the locality in most workloads, writes are often performed over a subset of blocks. Thus, some flash memory blocks may be frequently overwritten and tend to wear out earlier than others. The Flash Translation Layer implements wear-leveling algorithms to even out writes over flash memory blocks. The efficiency of the algorithms used directly impacts the lifetime of the flash SSD.
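The out-of-place mapping and garbage-collection behavior described above can be illustrated with a minimal page-level FTL sketch. The geometry, the greedy victim policy, and all class and method names are simplifying assumptions for illustration; real FTLs combine block- and page-level mapping, overprovisioning, and wear leveling:

```python
class ToyFTL:
    """Page-level FTL sketch with out-of-place writes (illustration only)."""

    def __init__(self, num_blocks=8, pages_per_block=4):
        self.ppb = pages_per_block
        self.map = {}                                    # lpn -> (block, page)
        self.valid = [[False] * pages_per_block for _ in range(num_blocks)]
        free = list(range(num_blocks))                   # clean-block pool
        self.active = free.pop(0)                        # block accepting writes
        self.free_blocks = free
        self.next_page = 0

    def write(self, lpn):
        if lpn in self.map:                              # out-of-place update:
            b, p = self.map[lpn]                         # the old physical page
            self.valid[b][p] = False                     # becomes stale (awaits GC)
        self.valid[self.active][self.next_page] = True
        self.map[lpn] = (self.active, self.next_page)
        self.next_page += 1
        if self.next_page == self.ppb:                   # active block is full:
            self.active = self.free_blocks.pop(0)        # take a clean block
            self.next_page = 0

    def gc_victim(self):
        # Greedy garbage collection: pick the block with the fewest valid
        # pages; its valid pages would be copied out before the erase.
        return min(range(len(self.valid)), key=lambda b: sum(self.valid[b]))


ftl = ToyFTL()
for lpn in [0, 1, 0, 0, 2]:          # two overwrites of logical page 0
    ftl.write(lpn)
print(ftl.map[0])                    # -> (0, 3): third physical location of lpn 0
```

Each overwrite of logical page 0 lands on a fresh physical page and leaves a stale one behind, which is exactly the state the garbage collector later reclaims; erasing without relocating the remaining valid pages first would lose data.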

2.3 Solid-State Drives: Characteristics and Implications

The design nature of flash-based solid-state drives introduces several layers of challenges [23, 5]. Thus, an important question that needs to be addressed before making a switch from HDDs to SSDs is where and how the SSD can bring a cost-effective improvement to the performance of existing storage systems, considering the I/O characteristics discussed in Section 2.4. Chen et al. [17] provide a comprehensive study of SSD performance for various workloads, answering several questions and verifying common notions about this technology. Their work indicates that SSDs, as expected, show huge performance improvement for random reads over HDDs: their results show that SSDs achieve 31 times higher throughput. Interestingly, sequential read performance on SSDs has also overtaken that of HDDs. Moreover, SSDs show non-uniform read access latencies, an indication that performance is correlated with the workload access pattern. In contrast, write accesses to SSDs are almost independent of the workload access pattern. This is associated with the fact that write requests go to the on-drive cache, regardless of the access pattern. Similar to HDDs, the on-drive cache plays a significant role in SSD design to boost write performance. In addition, internal drive-specific operations can significantly impact the performance of the SSD. Most of these internal operations, such as garbage collection and wear leveling, are caused by write requests. Their results demonstrate that read operations are affected more than writes, since reads will most likely be served directly from the flash rather than the cache. For example, after inserting a 10 ms interval between writes, the fraction of reads with lower latencies increases from 57% to 74% in one case and from 78% to 95% in another. This shows how much impact internal operations can have on foreground jobs.
Also, as workload randomness increases, the write performance of SSDs suffers significant degradation, which may lead to further implications for read performance as well. Furthermore, actions performed by file systems, e.g., journaling and metadata synchronization, worsen the flash write amplification problem [33], because they produce extra write operations that contribute to the wear of the SSD. Several important implications can be drawn from these results and observations. First, read access latency on SSDs is not always uniform and is correlated with the workload access pattern. Second, write performance on SSDs with on-drive caches exhibits uniform behavior regardless of the workload access pattern. Third, internal drive-specific

operations driven by incoming write requests can have a significant impact on the overall performance of SSDs, especially on reads. Fourth, internal fragmentation can also affect storage performance. Fifth, more frequent write operations to SSDs cause the file system to produce extra write requests, which worsens the write amplification problem and thus impacts the wear of the SSD [33]. These implications show that SSDs may not necessarily win a performance comparison against HDDs. SSDs have a strong performance lead over HDDs when dealing with random read workloads, but the performance gap gets narrower with sequential workloads. Narayanan et al. [35] provide a cost analysis model of the SSD/HDD trade-off for different large data center server workloads. Their results show that replacing HDDs with SSDs is not a cost-effective solution for most of the workloads, due to the low capacity per dollar of SSDs. Although the cost of SSDs continues to decline, we still think that for enterprise workloads this remains a concern for building an all-SSD storage solution. On the other hand, the benefits of hybrid storage systems can also be very limited if they are not used the right way, considering the target workloads and the correlation of performance with access patterns.

2.4 Understanding Application I/O Characteristics

It is important to understand the target application's I/O characteristics in order to be able to optimize its performance. There are several important I/O characteristics that need to be considered. First, we need to understand the amount of I/O load, i.e., how much I/O the application is doing and how it changes over time. Second, we need to consider the I/O request size. There are applications that do small block I/Os, while others do large streaming I/O in which megabytes of data are transferred at a time.
It is important to understand the I/O request size because it affects other optimization metrics. For example, applications with large I/Os may impact latency if those I/Os hold up the storage port for long periods. Third, we need to identify the access pattern. That is, is it a read workload, a write workload, or a mixed read/write workload? In addition, we need to understand the locality of access of the application's I/O. There is a significant compounding factor that is dramatically accelerating the demands on storage performance: the increasing randomization of I/O operations. The primary cause of this increased randomization trend is consolidation, which means that as more applications and systems are consolidated and

virtualized, they no longer enjoy dedicated storage systems and disks. In fact, the move toward improving data center efficiency through consolidation has been ongoing for the past decade [41].

2.5 Markov Decision Process

A Markov decision process (MDP) is a discrete-time stochastic process that provides a mathematical framework for modeling decision-making problems that are partly random and partly under the control of a decision maker [40]. It is widely used when studying optimization problems solved with dynamic programming and/or reinforcement learning. At each time step, the process is in a certain state S_i, and the controller can perform an action a available for that state. The process, in turn, responds to the action by moving into a new state S_j and incurring a corresponding cost. Thus, there is a probability for the process to move into S_j, and this probability is influenced by the chosen action. The Markov decision process framework is defined by a 4-tuple (S, A, T, C), where S is the state space, A is the action space, T is a 2-D transition matrix, and C is the cost incurred in moving from state S_i into state S_j by applying action a ∈ A(S_i). Each element P_ij of the matrix T expresses the probability for the process to move from state S_i at time t into state S_j at time t + 1. The ultimate goal in solving a Markov decision process is to find an optimal action for each state, i.e., a policy. A policy is a function that guides the system to apply the best possible action given the current circumstances. An action is optimal if, with respect to the space of all possible actions, it minimizes the expected discounted total cost from all states. There are different approaches to solving a Markov decision process: linear programming and dynamic programming. In this thesis, we take the dynamic programming approach to solve our optimization problem for a hybrid storage system.
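The dynamic-programming approach to MDPs can be illustrated with value iteration on a toy example. The sketch below is generic and is not the FSM model developed in this thesis; the two states, two actions, transition probabilities, and costs are invented purely for illustration:

```python
def value_iteration(S, A, T, C, gamma=0.9, eps=1e-6):
    """Compute the minimum expected discounted total cost V and a greedy
    policy for a finite MDP (S, A, T, C), where T[s][a][s2] is the
    transition probability and C[s][a] the immediate cost."""
    V = {s: 0.0 for s in S}
    while True:
        # Bellman update: best action minimizes cost-to-go from each state.
        V_new = {s: min(C[s][a] +
                        gamma * sum(T[s][a][s2] * V[s2] for s2 in S)
                        for a in A)
                 for s in S}
        delta = max(abs(V_new[s] - V[s]) for s in S)
        V = V_new
        if delta < eps:
            break
    policy = {s: min(A, key=lambda a: C[s][a] +
                     gamma * sum(T[s][a][s2] * V[s2] for s2 in S))
              for s in S}
    return V, policy

# Toy two-state example (deterministic transitions for clarity).
S = ["hdd", "ssd"]
A = ["stay", "migrate"]
T = {"hdd": {"stay": {"hdd": 1.0, "ssd": 0.0},
             "migrate": {"hdd": 0.0, "ssd": 1.0}},
     "ssd": {"stay": {"hdd": 0.0, "ssd": 1.0},
             "migrate": {"hdd": 1.0, "ssd": 0.0}}}
C = {"hdd": {"stay": 5.0, "migrate": 2.0},
     "ssd": {"stay": 1.0, "migrate": 4.0}}
V, policy = value_iteration(S, A, T, C)
```

With these costs, the optimal policy migrates out of the expensive "hdd" state and stays in the cheap "ssd" state; the value function converges to the fixed point of the Bellman equation.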
2.6 Related Work

There are several proposals and commercial implementations already in place that utilize SSDs in storage systems [7]. For example, in 2008 two leading online search engine providers, google.com and baidu.com, both announced their plans to migrate their existing hard-disk-based storage systems to platforms built on SSDs, though it is not clear whether they

have completely migrated all HDDs to SSDs or built hybrid systems [18, 45]. In the following, we describe several storage systems proposed in the literature.

Gordon High Performance System: SSD-only Cluster

Motivated by the need for data-centric and power-efficient high-performance computing for massive and growing amounts of data, Gordon [14, 15] is designed with an SSD-only storage system for data-intensive computing. Gordon uses a total of 300 TB of flash-based SSD storage [4, 3]. It is designed to achieve three important goals:

1. reduce the performance gap between processing and storage I/O in large-scale data-intensive computing,

2. significantly improve computing performance, and

3. improve power efficiency in such environments.

The Gordon cluster consists of 1024 compute nodes and 64 I/O nodes connected through QDR InfiniBand interconnect switches, in which each switch connects 16 compute nodes and one I/O node [2]. Table 2.1 summarizes the technical specifications of the Gordon HPC cluster. Gordon's SSD-based storage system is the vital component for achieving those objectives, and utilizing SSDs to build this system required major design considerations. One of the major design aspects of Gordon is the Flash Translation Layer that manages the flash storage. The Flash Translation Layer manages wear-leveling operations and maintains the logical block address mapping. In addition, it schedules accesses to the storage array to provide high-performance access. In Gordon, a new Flash Translation Layer design is implemented in order to exploit as much parallelism as possible to improve performance. Gordon's Flash Translation Layer is implemented in the flash array controller, which provides the hardware interface to the array.
The original Flash Translation Layer design that Gordon extends performs program operations at a write point, which is a pointer to a page of flash memory within the array. A major limitation of this implementation is that it only allows a single write point, i.e., no parallel operations can be performed. The authors implement three approaches to overcome this limitation. The first is to enforce dynamic parallelism between accesses to the flash array. To achieve this, Gordon's Flash Translation Layer supports multiple write points, where each write point is assigned a sequence number to maintain operation order. Once a logical block address is written to a particular write point, future writes to it must also go to the same write point, or to a write point with a larger sequence number.
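The multiple-write-point rule can be sketched as follows. This is a toy model, not Gordon's controller logic: it assumes a hypothetical FTL that remembers, per logical block address, the sequence number of the write point last used, and round-robins among the eligible write points for parallelism:

```python
class WritePoint:
    def __init__(self, seq):
        self.seq = seq        # sequence number preserves operation order
        self.pages = []       # LBAs programmed at this write point

class MiniFTL:
    """Toy FTL: later writes to an LBA may only use the same write point
    or one with a larger sequence number, so order can be reconstructed."""
    def __init__(self, n_points):
        self.points = [WritePoint(seq) for seq in range(n_points)]
        self.last_seq = {}    # lba -> seq of the write point last used
        self.rr = 0           # round-robin counter for parallelism

    def write(self, lba):
        floor = self.last_seq.get(lba, 0)
        # Only write points respecting the ordering constraint are eligible.
        eligible = [wp for wp in self.points if wp.seq >= floor]
        wp = eligible[self.rr % len(eligible)]
        self.rr += 1
        wp.pages.append(lba)
        self.last_seq[lba] = wp.seq
        return wp.seq
```

For example, repeated writes to the same LBA always land on write points with non-decreasing sequence numbers, while writes to distinct LBAs spread across the write points.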

Compute Nodes:
  Sockets: 2
  Cores: 16
  Clock Speed: 2.6 GHz
  Flop Speed: 333 Gflop/s
  Memory Capacity: 64 GB
  Memory Bandwidth: 85 GB/s

I/O Nodes:
  Sockets: 2
  Cores: 16
  Clock Speed: 2.6 GHz
  Memory Capacity: 64 GB
  Memory Bandwidth: 85 GB/s
  Flash Memory: 4.8 TB

Full System:
  Total Compute Nodes: 1024
  Total Compute Cores: 16,384
  Peak Performance: 341 Tflop/s
  Total Memory: 64 TB
  Total Memory Bandwidth: 87 TB/s
  Total Flash Memory: 300 TB

QDR InfiniBand Interconnect:
  Topology: 3D Torus
  Link Bandwidth: 8 GB/s (bidirectional)

I/O Subsystem:
  File Systems: NFS, Lustre
  Storage Capacity (usable): 1.5 PB
  I/O Bandwidth: 100 GB/s

Table 2.1: Gordon technical specifications.

The second is to manage the flash array at a larger granularity by combining physical pages from several dies into super-pages. The authors examined three ways to create super-pages: horizontal striping, vertical striping, and a combination of the two called 2D striping. In horizontal striping, each physical page in a super-page is mapped to a separate bus, which allows accesses to the physical pages to progress in parallel; in this case, the number of buses determines the size of the super-page. In vertical striping, the physical pages in a super-page are mapped to separate dies on the same bus; in this scenario, the number of dies on each bus limits the size of the vertical super-page. 2D striping, in turn, combines horizontal and vertical striping in order to create even larger super-pages. In this case, the flash array is divided into rectangular sets that form part of the same horizontal and vertical stripes, allowing the Flash Translation Layer to assign one write point to each set. 2D striping reduces the management overhead as well as the memory required to store the logical block address table. However, because of the strict design, which is bounded by the target application, choosing a large super-page may not result in better performance for all workloads. This is especially noticeable when choosing a large super-page size for small sequential reads. A bypassing mechanism is implemented in the Flash Translation Layer to merge incoming read requests with pending requests to the same page. To evaluate Gordon, the authors used various benchmarks that use MapReduce [1] for parallel computation. An important finding concerning power consumption is that using SSDs eliminates the majority of the storage system's idle power, saving over 68% and hence allowing the design to take advantage of more efficient processors.
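The three striping schemes can be sketched as a mapping from one super-page to the (bus, die) slots it spans. This is an illustrative sketch only; the geometry parameters and function name are hypothetical, not Gordon's actual layout code:

```python
def superpage_pages(scheme, n_buses, dies_per_bus, width=None, height=None):
    """Return the (bus, die) slots that one super-page spans."""
    if scheme == "horizontal":
        # One physical page per bus: super-page size bounded by bus count.
        return [(b, 0) for b in range(n_buses)]
    if scheme == "vertical":
        # One physical page per die on a single bus: bounded by dies per bus.
        return [(0, d) for d in range(dies_per_bus)]
    if scheme == "2d":
        # A width x height rectangle combining both, for larger super-pages.
        return [(b, d) for b in range(width) for d in range(height)]
    raise ValueError(f"unknown scheme: {scheme}")
```

A 2x4 rectangular set, for instance, yields an 8-page super-page served by one write point, which is why 2D striping shrinks the write-point and mapping-table state.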
In addition, the results show that Gordon runs faster than disk-based clusters by a factor of 1.5x and is 2.5x more efficient. Nonetheless, Gordon's benefits come at a cost: the high cost associated with this system may prevent widespread adoption of such a design. The authors argue, however, that for systems that need to store very large amounts of data, the cost and performance trade-off is justified compared to building with cheap commodity hardware. Although some of the design aspects of Gordon might be of great interest to apply, the cost of building such a system might restrict adoption, especially for general-purpose servers. In addition, it requires major modifications within the SSDs for the new Flash Translation Layer algorithm, which adds extra overhead for maintenance and upgrades. Even if we implemented Gordon with a hybrid storage approach, in which HDDs are used to store file system replicas, there would be a concern related to the

recovery time of a failed file system and whether the power efficiency would be compromised.

Griffin Hybrid Storage System: Flash as an End-Point

The Griffin hybrid storage system [24] is designed with SSDs as the main storage medium, supported by HDDs. Although single-level cell (SLC) flash-based SSDs offer excellent endurance, usually 100K program/erase cycles or more, and excellent IOPS, their cost can be a significant barrier to adoption. Thus, the authors suggest using multi-level cell (MLC) SSDs instead. MLCs, in addition to being cheaper than SLCs, offer larger capacity. However, a key disadvantage of this type of SSD is its low endurance, averaging 10K program/erase cycles. Thus, a clear trade-off has to be considered, cost vs. endurance (and capacity), especially when using SSDs as the permanent store. In order to solve the endurance issue without compromising on cost, Griffin utilizes HDDs as a write buffer for the SSDs to minimize the number of writes going to the SSDs, and hence extend their lifetime. Griffin appends writes to a log-structured HDD cache and periodically flushes it to SSD, preferably before subsequent reads. The log-structured cache brings two key advantages. First, it is known that HDDs are slow to handle writes, especially random writes, but operate at their best when handling sequential workloads; the log-structured cache thus exploits the strength of the HDDs. Second, this aggregation enhances the SSD lifetime not only by minimizing the number of writes, but also by minimizing write amplification. Write amplification is a hidden phenomenon that affects both the performance and the endurance of SSDs by introducing further write operations to synchronize file system metadata as well as more device-level write operations.
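Griffin's write-buffering idea can be sketched as follows. This is a toy model (in-memory lists standing in for the HDD log and an SSD write counter), not Griffin's implementation; it shows how absorbing overwrites in the log reduces the writes that reach the SSD:

```python
class LogCache:
    """Toy log-structured HDD write cache in front of an SSD."""
    def __init__(self):
        self.log = []             # append-only HDD log: sequential, cheap
        self.ssd_writes = 0       # writes that actually reached the SSD

    def write(self, lba, data):
        self.log.append((lba, data))   # buffer the write; overwrites pile up

    def flush(self):
        # Later log entries win, so overwrites are absorbed: only the
        # latest version of each block is written to the SSD.
        latest = dict(self.log)
        self.ssd_writes += len(latest)
        self.log.clear()
        return latest

cache = LogCache()
for i in range(10):
    cache.write(42, f"v{i}")           # ten overwrites of one hot block
cache.write(7, "x")
live = cache.flush()
```

Here eleven buffered writes become two SSD writes. The real policy questions Griffin must answer, what to cache and how long before flushing, are discussed next.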
An important challenge faced by Griffin's design is to extend the SSD lifetime without compromising read performance from the SSD. That is, to extend the lifetime of the SSD while ensuring data is ready in the SSD for read requests. Thus, Griffin's design has to address two things in order to retain performance: data has to be kept in the HDD as long as possible to buffer overwrites, and at the same time data must be flushed from the HDD in time to avoid expensive HDD reads. In other words, the caching policy has to answer two questions: what data to cache and how long to cache it for. The choice of policy depends directly on the characteristics of the workloads. Based on the analysis the authors performed on various desktop and server workloads collected from previous works, the following conclusions are made:

1. Desktop and server workloads encounter a high degree of overwrites, which allows an idealized HDD write cache to achieve significant write savings.

2. There is a high degree of spatial locality in block overwrites, which helps achieve high write savings while writing fewer blocks, and in turn reduces the possibility of a read penalty, i.e., reads from the HDD.

3. Blocks that are most heavily written rarely receive reads.

4. Two observations help determine the data retention duration in the HDD: intervals between writes and subsequent overwrites are typically short for desktop workloads, while the time interval between a block write and its consecutive read is large. This suggests that data can be retained in the HDD for a long enough period.

In addition, Griffin implements two caching policies as follows:

1. Full caching: this policy caches every write request. This is the default policy.

2. Selective caching: in this policy, only the most overwritten blocks are cached. To implement this policy, the overwrite ratio is calculated, which is the ratio of the number of overwrites to the number of writes a block receives. A block is written to the HDD if its overwrite ratio exceeds a defined overwrite threshold.

HybridStore

Motivated by SSD design trade-offs, HybridStore [31] is designed to integrate SSDs as a balancing, i.e., accelerating, storage unit to improve performance and provide service differentiation under cost constraints. To achieve these goals, HybridStore consists of two key elements:

1. A capacity-planning model called HybridPlan, which is a long-term resource provisioner.

2. HybridDyn, which handles the performance guarantees.

HybridPlan

A key component of the HybridStore system is the planning model. The main objective of the HybridStore model is to minimize the deployment and operation cost, in terms of dollars

($), subject to a combination of performance and re-deployment limits. Actions occur at a coarse time scale, i.e., months to years. In other words, the capacity-planning problem is formulated as a model to minimize the cost of acquiring and installing HybridStore while meeting the targeted workload performance and useful-lifetime budget constraints. The performance budget uses IOPS as its metric, whereas the lifetime budget represents the time between successive capacity-planning decisions and equipment provisioning. That said, the cost of a HybridStore can be expressed as follows:

Cost_HybridStore = Cost_Installation + Cost_Recurring

Cost_Installation refers to the installation cost of devices, whereas Cost_Recurring covers the cost of the associated power, cooling, and all maintenance. The performance and lifetime budget constraints are expressed in terms of data features, device capacities and bandwidth, as well as SSD lifetime. An essential part of the capacity-planning formulation is the analysis of the target workload to extract and understand its hidden characteristics. Common features, such as total size, read-to-write ratio, and request arrival rate, are first gathered by dividing the workloads into sub-workloads called classes. To achieve this, the entire logical address space of the workload is divided into fixed-size chunks that are mapped to different classes. Then, the classes are analyzed to find commonality within workload streams.

HybridDyn

HybridDyn is a statistical model of the performance of the SSD and HDD used to make dynamic request-partitioning decisions. In addition, it employs data management techniques within the SSD. HybridDyn consists of several components: a performance prediction module, which is the core component of HybridDyn, a fragmentation buster, and a write regulator.
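HybridPlan's chunk-to-class step described above can be sketched as follows. The chunk size and the per-chunk feature set are assumptions chosen for illustration, not HybridStore's actual parameters:

```python
from collections import defaultdict

CHUNK_SIZE = 1 << 20   # assume 1 MiB of logical address space per chunk

def classify(trace, chunk_size=CHUNK_SIZE):
    """Divide the logical address space into fixed-size chunks and
    accumulate simple per-chunk features.
    trace: iterable of (op, byte_offset) pairs, op in {"R", "W"}."""
    classes = defaultdict(lambda: {"reads": 0, "writes": 0})
    for op, offset in trace:
        chunk = offset // chunk_size          # map request to its chunk
        classes[chunk]["reads" if op == "R" else "writes"] += 1
    return dict(classes)

trace = [("R", 0), ("R", 4096), ("W", 2 << 20), ("R", 2 << 20)]
classes = classify(trace)
```

Chunks with similar feature vectors (e.g., read-to-write ratio) could then be grouped into the same class and analyzed together, as HybridPlan does when searching for commonality within workload streams.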
Performance Prediction Module for SSD

The core component of HybridDyn is a performance prediction model based on a very long history. This model is used to take actions aimed at improving SSD performance, subject to the current workload status. For example, a large number of random writes might cause fragmentation over time, and the resulting garbage-collector invocations would degrade the performance of subsequent requests. This component therefore uses a history of crucial workload characteristics that play a major role in predicting performance changes. These features include: a) average read-to-write

ratio, b) spatial locality, c) request inter-arrival time, and d) current request size. Using this information, the performance prediction model uses the system response time as the measure of SSD performance, which is a function of the device service times and the workload.

Fragmentation Busting

This component is responsible for maintaining the performance level and minimizing the effects of large random write requests on the SSD. As the number of random writes allowed into the SSD increases, data fragmentation on the device increases accordingly, which leads to more frequent garbage-collection overhead that degrades the overall performance. Fragmentation busting is a flushing technique implemented to prevent or minimize fragmented zones. It relies on the device controller to decide which pages on the device are causing the fragmentation and mark them for flushing.

Write Regulation

Write regulation is a technique used to manage the SSD lifetime by handling sudden, unanticipated bursts of requests. Although workload characterization is part of the HybridPlan module of the system, the workload still poses a challenge due to its unpredictability, which stems from its dynamic nature and the several factors affecting it. The write regulator monitors the write rate of blocks, and once an irregular burst takes place, a re-balancing action is performed. It controls the writes sent to the flash by overriding decisions made by the performance module, randomly choosing requests destined for the SSD and directing them back to disk. There are several important observations that can be drawn from the HybridPlan solver results for various workloads. First, some sequential read-intensive workloads can be satisfied with slow HDDs. This is attributed to a low request arrival rate and hence a low bandwidth requirement.
Second, similar to the read-intensive case, some sequential write-intensive workloads can be satisfied with slower HDDs if they do not require high IOPS. However, as the IOPS requirement increases, an SLC-class SSD becomes mandatory, according to HybridPlan, to satisfy the performance budget. Third, it is known that HDDs perform better when handling sequential requests, whereas SSDs are far superior when handling random requests. Accordingly, HybridPlan shows a strong correlation between increased workload randomness and the large number of SSDs needed to satisfy the performance constraints. Additionally, random read-intensive workloads require about 3 times more SSD storage to meet the IOPS requirement. Thus, random workloads are more costly than sequential ones because they need a larger number of SSD devices. On the other hand, write-intensive workloads have to be considered carefully in the lifetime budget.


Disks and RAID. Profs. Bracy and Van Renesse. based on slides by Prof. Sirer

Disks and RAID. Profs. Bracy and Van Renesse. based on slides by Prof. Sirer Disks and RAID Profs. Bracy and Van Renesse based on slides by Prof. Sirer 50 Years Old! 13th September 1956 The IBM RAMAC 350 Stored less than 5 MByte Reading from a Disk Must specify: cylinder # (distance

More information

Flash 101. Violin Memory Switzerland. Violin Memory Inc. Proprietary 1

Flash 101. Violin Memory Switzerland. Violin Memory Inc. Proprietary 1 Flash 101 Violin Memory Switzerland Violin Memory Inc. Proprietary 1 Agenda - What is Flash? - What is the difference between Flash types? - Why are SSD solutions different from Flash Storage Arrays? -

More information

SSD Performance Tips: Avoid The Write Cliff

SSD Performance Tips: Avoid The Write Cliff ebook 100% KBs/sec 12% GBs Written SSD Performance Tips: Avoid The Write Cliff An Inexpensive and Highly Effective Method to Keep SSD Performance at 100% Through Content Locality Caching Share this ebook

More information

CONFIGURATION GUIDELINES: EMC STORAGE FOR PHYSICAL SECURITY

CONFIGURATION GUIDELINES: EMC STORAGE FOR PHYSICAL SECURITY White Paper CONFIGURATION GUIDELINES: EMC STORAGE FOR PHYSICAL SECURITY DVTel Latitude NVMS performance using EMC Isilon storage arrays Correct sizing for storage in a DVTel Latitude physical security

More information

Chapter 6. 6.1 Introduction. Storage and Other I/O Topics. p. 570( 頁 585) Fig. 6.1. I/O devices can be characterized by. I/O bus connections

Chapter 6. 6.1 Introduction. Storage and Other I/O Topics. p. 570( 頁 585) Fig. 6.1. I/O devices can be characterized by. I/O bus connections Chapter 6 Storage and Other I/O Topics 6.1 Introduction I/O devices can be characterized by Behavior: input, output, storage Partner: human or machine Data rate: bytes/sec, transfers/sec I/O bus connections

More information

June 2009. Blade.org 2009 ALL RIGHTS RESERVED

June 2009. Blade.org 2009 ALL RIGHTS RESERVED Contributions for this vendor neutral technology paper have been provided by Blade.org members including NetApp, BLADE Network Technologies, and Double-Take Software. June 2009 Blade.org 2009 ALL RIGHTS

More information

File System & Device Drive. Overview of Mass Storage Structure. Moving head Disk Mechanism. HDD Pictures 11/13/2014. CS341: Operating System

File System & Device Drive. Overview of Mass Storage Structure. Moving head Disk Mechanism. HDD Pictures 11/13/2014. CS341: Operating System CS341: Operating System Lect 36: 1 st Nov 2014 Dr. A. Sahu Dept of Comp. Sc. & Engg. Indian Institute of Technology Guwahati File System & Device Drive Mass Storage Disk Structure Disk Arm Scheduling RAID

More information

The Data Placement Challenge

The Data Placement Challenge The Data Placement Challenge Entire Dataset Applications Active Data Lowest $/IOP Highest throughput Lowest latency 10-20% Right Place Right Cost Right Time 100% 2 2 What s Driving the AST Discussion?

More information

SOLID STATE DRIVES AND PARALLEL STORAGE

SOLID STATE DRIVES AND PARALLEL STORAGE SOLID STATE DRIVES AND PARALLEL STORAGE White paper JANUARY 2013 1.888.PANASAS www.panasas.com Overview Solid State Drives (SSDs) have been touted for some time as a disruptive technology in the storage

More information

Technologies Supporting Evolution of SSDs

Technologies Supporting Evolution of SSDs Technologies Supporting Evolution of SSDs By TSUCHIYA Kenji Notebook PCs equipped with solid-state drives (SSDs), featuring shock and vibration durability due to the lack of moving parts, appeared on the

More information

WHITE PAPER Improving Storage Efficiencies with Data Deduplication and Compression

WHITE PAPER Improving Storage Efficiencies with Data Deduplication and Compression WHITE PAPER Improving Storage Efficiencies with Data Deduplication and Compression Sponsored by: Oracle Steven Scully May 2010 Benjamin Woo IDC OPINION Global Headquarters: 5 Speen Street Framingham, MA

More information

Automated Data-Aware Tiering

Automated Data-Aware Tiering Automated Data-Aware Tiering White Paper Drobo s revolutionary new breakthrough technology automates the provisioning, deployment, and performance acceleration for a fast tier of SSD storage in the Drobo

More information

Indexing on Solid State Drives based on Flash Memory

Indexing on Solid State Drives based on Flash Memory Indexing on Solid State Drives based on Flash Memory Florian Keusch MASTER S THESIS Systems Group Department of Computer Science ETH Zurich http://www.systems.ethz.ch/ September 2008 - March 2009 Supervised

More information

MS Exchange Server Acceleration

MS Exchange Server Acceleration White Paper MS Exchange Server Acceleration Using virtualization to dramatically maximize user experience for Microsoft Exchange Server Allon Cohen, PhD Scott Harlin OCZ Storage Solutions, Inc. A Toshiba

More information

How To Make A Backup System More Efficient

How To Make A Backup System More Efficient Identifying the Hidden Risk of Data De-duplication: How the HYDRAstor Solution Proactively Solves the Problem October, 2006 Introduction Data de-duplication has recently gained significant industry attention,

More information

All-Flash Storage Solution for SAP HANA:

All-Flash Storage Solution for SAP HANA: All-Flash Storage Solution for SAP HANA: Storage Considerations using SanDisk Solid State Devices WHITE PAPER 951 SanDisk Drive, Milpitas, CA 95035 www.sandisk.com Table of Contents Preface 3 Why SanDisk?

More information

Evaluation Report: Supporting Microsoft Exchange on the Lenovo S3200 Hybrid Array

Evaluation Report: Supporting Microsoft Exchange on the Lenovo S3200 Hybrid Array Evaluation Report: Supporting Microsoft Exchange on the Lenovo S3200 Hybrid Array Evaluation report prepared under contract with Lenovo Executive Summary Love it or hate it, businesses rely on email. It

More information

Flash In The Enterprise

Flash In The Enterprise Flash In The Enterprise Technology and Market Overview Chris M Evans, Langton Blue Ltd Architecting IT January 2014 Doc ID: AI1401-01S Table of Contents The Need for Flash Storage... 3 IOPS Density...

More information

All-Flash Arrays: Not Just for the Top Tier Anymore

All-Flash Arrays: Not Just for the Top Tier Anymore All-Flash Arrays: Not Just for the Top Tier Anymore Falling prices, new technology make allflash arrays a fit for more financial, life sciences and healthcare applications EXECUTIVE SUMMARY Real-time financial

More information

Software-defined Storage Architecture for Analytics Computing

Software-defined Storage Architecture for Analytics Computing Software-defined Storage Architecture for Analytics Computing Arati Joshi Performance Engineering Colin Eldridge File System Engineering Carlos Carrero Product Management June 2015 Reference Architecture

More information

Best Practices for Optimizing SQL Server Database Performance with the LSI WarpDrive Acceleration Card

Best Practices for Optimizing SQL Server Database Performance with the LSI WarpDrive Acceleration Card Best Practices for Optimizing SQL Server Database Performance with the LSI WarpDrive Acceleration Card Version 1.0 April 2011 DB15-000761-00 Revision History Version and Date Version 1.0, April 2011 Initial

More information

LSI MegaRAID CacheCade Performance Evaluation in a Web Server Environment

LSI MegaRAID CacheCade Performance Evaluation in a Web Server Environment LSI MegaRAID CacheCade Performance Evaluation in a Web Server Environment Evaluation report prepared under contract with LSI Corporation Introduction Interest in solid-state storage (SSS) is high, and

More information

Nexenta Performance Scaling for Speed and Cost

Nexenta Performance Scaling for Speed and Cost Nexenta Performance Scaling for Speed and Cost Key Features Optimize Performance Optimize Performance NexentaStor improves performance for all workloads by adopting commodity components and leveraging

More information

Flash for Databases. September 22, 2015 Peter Zaitsev Percona

Flash for Databases. September 22, 2015 Peter Zaitsev Percona Flash for Databases September 22, 2015 Peter Zaitsev Percona In this Presentation Flash technology overview Review some of the available technology What does this mean for databases? Specific opportunities

More information

Block based, file-based, combination. Component based, solution based

Block based, file-based, combination. Component based, solution based The Wide Spread Role of 10-Gigabit Ethernet in Storage This paper provides an overview of SAN and NAS storage solutions, highlights the ubiquitous role of 10 Gigabit Ethernet in these solutions, and illustrates

More information

The Technologies & Architectures. President, Demartek

The Technologies & Architectures. President, Demartek Deep Dive on Solid State t Storage The Technologies & Architectures Dennis Martin Dennis Martin President, Demartek Demartek Company Overview Industry analysis with on-site test lab Lab includes servers,

More information

Microsoft SQL Server 2014 Fast Track

Microsoft SQL Server 2014 Fast Track Microsoft SQL Server 2014 Fast Track 34-TB Certified Data Warehouse 103-TB Maximum User Data Tegile Systems Solution Review 2U Design: Featuring Tegile T3800 All-Flash Storage Array http:// www.tegile.com/solutiuons/sql

More information

Make A Right Choice -NAND Flash As Cache And Beyond

Make A Right Choice -NAND Flash As Cache And Beyond Make A Right Choice -NAND Flash As Cache And Beyond Simon Huang Technical Product Manager simon.huang@supertalent.com Super Talent Technology December, 2012 Release 1.01 www.supertalent.com Legal Disclaimer

More information

The Pitfalls of Deploying Solid-State Drive RAIDs

The Pitfalls of Deploying Solid-State Drive RAIDs The Pitfalls of Deploying Solid-State Drive RAIDs Nikolaus Jeremic 1, Gero Mühl 1, Anselm Busse 2 and Jan Richling 2 Architecture of Application Systems Group 1 Faculty of Computer Science and Electrical

More information

Data Center Solutions

Data Center Solutions Data Center Solutions Systems, software and hardware solutions you can trust With over 25 years of storage innovation, SanDisk is a global flash technology leader. At SanDisk, we re expanding the possibilities

More information

FLASH GAINS GROUND AS ENTERPRISE STORAGE OPTION

FLASH GAINS GROUND AS ENTERPRISE STORAGE OPTION FLASH GAINS GROUND AS ENTERPRISE STORAGE OPTION With new management functions placing it closer to parity with hard drives, as well as new economies, flash is gaining traction as a standard media for mainstream

More information

AirWave 7.7. Server Sizing Guide

AirWave 7.7. Server Sizing Guide AirWave 7.7 Server Sizing Guide Copyright 2013 Aruba Networks, Inc. Aruba Networks trademarks include, Aruba Networks, Aruba Wireless Networks, the registered Aruba the Mobile Edge Company logo, Aruba

More information

NetApp FAS Hybrid Array Flash Efficiency. Silverton Consulting, Inc. StorInt Briefing

NetApp FAS Hybrid Array Flash Efficiency. Silverton Consulting, Inc. StorInt Briefing NetApp FAS Hybrid Array Flash Efficiency Silverton Consulting, Inc. StorInt Briefing PAGE 2 OF 7 Introduction Hybrid storage arrays (storage systems with both disk and flash capacity) have become commonplace

More information

How To Write On A Flash Memory Flash Memory (Mlc) On A Solid State Drive (Samsung)

How To Write On A Flash Memory Flash Memory (Mlc) On A Solid State Drive (Samsung) Using MLC NAND in Datacenters (a.k.a. Using Client SSD Technology in Datacenters) Tony Roug, Intel Principal Engineer SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA.

More information

Introduction to Gluster. Versions 3.0.x

Introduction to Gluster. Versions 3.0.x Introduction to Gluster Versions 3.0.x Table of Contents Table of Contents... 2 Overview... 3 Gluster File System... 3 Gluster Storage Platform... 3 No metadata with the Elastic Hash Algorithm... 4 A Gluster

More information

Accelerating Enterprise Applications and Reducing TCO with SanDisk ZetaScale Software

Accelerating Enterprise Applications and Reducing TCO with SanDisk ZetaScale Software WHITEPAPER Accelerating Enterprise Applications and Reducing TCO with SanDisk ZetaScale Software SanDisk ZetaScale software unlocks the full benefits of flash for In-Memory Compute and NoSQL applications

More information

Violin Memory Arrays With IBM System Storage SAN Volume Control

Violin Memory Arrays With IBM System Storage SAN Volume Control Technical White Paper Report Best Practices Guide: Violin Memory Arrays With IBM System Storage SAN Volume Control Implementation Best Practices and Performance Considerations Version 1.0 Abstract This

More information

Integrating Flash-based SSDs into the Storage Stack

Integrating Flash-based SSDs into the Storage Stack Integrating Flash-based SSDs into the Storage Stack Raja Appuswamy, David C. van Moolenbroek, Andrew S. Tanenbaum Vrije Universiteit, Amsterdam April 19, 2012 Introduction: Hardware Landscape $/GB of flash

More information

The Economics of Intelligent Hybrid Storage. An Enmotus White Paper Sep 2014

The Economics of Intelligent Hybrid Storage. An Enmotus White Paper Sep 2014 The Economics of Intelligent Hybrid Storage An Enmotus White Paper Sep 2014 SUMMARY Solid State Storage is no longer the storage of the future. It can be found in high- end data centers, in the servers

More information

An Exploration of Hybrid Hard Disk Designs Using an Extensible Simulator

An Exploration of Hybrid Hard Disk Designs Using an Extensible Simulator An Exploration of Hybrid Hard Disk Designs Using an Extensible Simulator Pavan Konanki Thesis submitted to the Faculty of the Virginia Polytechnic Institute and State University in partial fulfillment

More information

89 Fifth Avenue, 7th Floor. New York, NY 10003. www.theedison.com 212.367.7400. White Paper. HP 3PAR Adaptive Flash Cache: A Competitive Comparison

89 Fifth Avenue, 7th Floor. New York, NY 10003. www.theedison.com 212.367.7400. White Paper. HP 3PAR Adaptive Flash Cache: A Competitive Comparison 89 Fifth Avenue, 7th Floor New York, NY 10003 www.theedison.com 212.367.7400 White Paper HP 3PAR Adaptive Flash Cache: A Competitive Comparison Printed in the United States of America Copyright 2014 Edison

More information

WHITE PAPER. Drobo TM Hybrid Storage TM

WHITE PAPER. Drobo TM Hybrid Storage TM WHITE PAPER Drobo TM Hybrid Storage TM Table of Contents Introduction...3 What is Hybrid Storage?...4 SSDs Enable Hybrid Storage...4 One Pool, Multiple Tiers...5 Fully Automated Tiering...5 Tiering Without

More information

Comprehending the Tradeoffs between Deploying Oracle Database on RAID 5 and RAID 10 Storage Configurations. Database Solutions Engineering

Comprehending the Tradeoffs between Deploying Oracle Database on RAID 5 and RAID 10 Storage Configurations. Database Solutions Engineering Comprehending the Tradeoffs between Deploying Oracle Database on RAID 5 and RAID 10 Storage Configurations A Dell Technical White Paper Database Solutions Engineering By Sudhansu Sekhar and Raghunatha

More information

How it can benefit your enterprise. Dejan Kocic Hitachi Data Systems (HDS)

How it can benefit your enterprise. Dejan Kocic Hitachi Data Systems (HDS) PRESENTATION Case for flash TITLE GOES storage HERE How it can benefit your enterprise Dejan Kocic Hitachi Data Systems (HDS) SNIA Legal Notice The material contained in this tutorial is copyrighted by

More information

EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage

EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage Applied Technology Abstract This white paper describes various backup and recovery solutions available for SQL

More information

Data Center Storage Solutions

Data Center Storage Solutions Data Center Storage Solutions Enterprise software, appliance and hardware solutions you can trust When it comes to storage, most enterprises seek the same things: predictable performance, trusted reliability

More information

SLC vs MLC: Proper Flash Selection for SSDs in Industrial, Military and Avionic Applications. A TCS Space & Component Technology White Paper

SLC vs MLC: Proper Flash Selection for SSDs in Industrial, Military and Avionic Applications. A TCS Space & Component Technology White Paper SLC vs MLC: Proper Flash Selection for SSDs in Industrial, Military and Avionic Applications A TCS Space & Component Technology White Paper Introduction As with most storage technologies, NAND Flash vendors

More information

Scala Storage Scale-Out Clustered Storage White Paper

Scala Storage Scale-Out Clustered Storage White Paper White Paper Scala Storage Scale-Out Clustered Storage White Paper Chapter 1 Introduction... 3 Capacity - Explosive Growth of Unstructured Data... 3 Performance - Cluster Computing... 3 Chapter 2 Current

More information

Lab Evaluation of NetApp Hybrid Array with Flash Pool Technology

Lab Evaluation of NetApp Hybrid Array with Flash Pool Technology Lab Evaluation of NetApp Hybrid Array with Flash Pool Technology Evaluation report prepared under contract with NetApp Introduction As flash storage options proliferate and become accepted in the enterprise,

More information

Benefits of Solid-State Storage

Benefits of Solid-State Storage This Dell technical white paper describes the different types of solid-state storage and the benefits of each. Jeff Armstrong Gary Kotzur Rahul Deshmukh Contents Introduction... 3 PCIe-SSS... 3 Differences

More information

Network Attached Storage. Jinfeng Yang Oct/19/2015

Network Attached Storage. Jinfeng Yang Oct/19/2015 Network Attached Storage Jinfeng Yang Oct/19/2015 Outline Part A 1. What is the Network Attached Storage (NAS)? 2. What are the applications of NAS? 3. The benefits of NAS. 4. NAS s performance (Reliability

More information

Speeding Up Cloud/Server Applications Using Flash Memory

Speeding Up Cloud/Server Applications Using Flash Memory Speeding Up Cloud/Server Applications Using Flash Memory Sudipta Sengupta Microsoft Research, Redmond, WA, USA Contains work that is joint with B. Debnath (Univ. of Minnesota) and J. Li (Microsoft Research,

More information

Performance Characteristics of VMFS and RDM VMware ESX Server 3.0.1

Performance Characteristics of VMFS and RDM VMware ESX Server 3.0.1 Performance Study Performance Characteristics of and RDM VMware ESX Server 3.0.1 VMware ESX Server offers three choices for managing disk access in a virtual machine VMware Virtual Machine File System

More information

Essentials Guide CONSIDERATIONS FOR SELECTING ALL-FLASH STORAGE ARRAYS

Essentials Guide CONSIDERATIONS FOR SELECTING ALL-FLASH STORAGE ARRAYS Essentials Guide CONSIDERATIONS FOR SELECTING ALL-FLASH STORAGE ARRAYS M ost storage vendors now offer all-flash storage arrays, and many modern organizations recognize the need for these highperformance

More information

Solid State Drive Technology

Solid State Drive Technology Technical white paper Solid State Drive Technology Differences between SLC, MLC and TLC NAND Table of contents Executive summary... 2 SLC vs MLC vs TLC... 2 NAND cell technology... 2 Write amplification...

More information

Quantum StorNext. Product Brief: Distributed LAN Client

Quantum StorNext. Product Brief: Distributed LAN Client Quantum StorNext Product Brief: Distributed LAN Client NOTICE This product brief may contain proprietary information protected by copyright. Information in this product brief is subject to change without

More information

PrimaryIO Application Performance Acceleration Date: July 2015 Author: Tony Palmer, Senior Lab Analyst

PrimaryIO Application Performance Acceleration Date: July 2015 Author: Tony Palmer, Senior Lab Analyst ESG Lab Spotlight PrimaryIO Application Performance Acceleration Date: July 215 Author: Tony Palmer, Senior Lab Analyst Abstract: PrimaryIO Application Performance Acceleration (APA) is designed to provide

More information