
BOOSTING RANDOM WRITE PERFORMANCE OF ENTERPRISE FLASH STORAGE SYSTEMS

A Thesis Presented to the Faculty of San Diego State University
In Partial Fulfillment of the Requirements for the Degree
Master of Science in Computer Science

by Janak R. Koshia
Spring 2011


Copyright 2011 by Janak R. Koshia. All Rights Reserved.

DEDICATION

I dedicate this to my father, whose blessings will always be with me; my mother, whose strength and courage have always been a source of inspiration; and my beloved fiancée, without whom I could not have done this.

ABSTRACT OF THE THESIS

Boosting Random Write Performance of Enterprise Flash Storage Systems
by Janak R. Koshia
Master of Science in Computer Science
San Diego State University, 2011

NAND flash memory is playing a key role in the revolution of storage systems due to its desirable features such as fast random reads and high energy efficiency. It has been extensively applied in mobile devices such as smart phones and PDAs. With increasing capacity, throughput, and durability, NAND flash memory based solid state disks (hereafter, flash SSDs) have started replacing hard disk drives (HDDs) in laptop and desktop systems. Employing high-end flash SSDs in server applications, however, is promising yet challenging. One of the challenges is that flash SSDs currently cannot fully meet the heavy random write requirements of data-intensive enterprise applications such as online transaction processing (OLTP), because of flash memory's inherent update/erasure mechanisms. In this thesis, to boost flash SSD random write performance, we develop a new cache management scheme called element-level parallel optimization (EPO), which buffers and reorders write requests so that the element-level parallelism within the architecture of a flash SSD can be largely utilized. Further, we evaluate the performance of the EPO scheme using a validated disk simulator with both synthetic benchmarks and real-world server-class traces. Experimental results demonstrate that EPO noticeably outperforms traditional least recently used (LRU) and a state-of-the-art flash write buffer management scheme, block padding least recently used (BPLRU).

Keywords: Flash memory, SSD, cache management, random write, storage system

TABLE OF CONTENTS

ABSTRACT
LIST OF TABLES
LIST OF FIGURES
ACKNOWLEDGEMENTS
CHAPTER
1 INTRODUCTION
2 RELATED WORK AND MOTIVATION
3 A SIMULATOR: DISKSIM
4 DESIGN AND IMPLEMENTATION
5 PERFORMANCE EVALUATION
6 CONCLUSION AND FUTURE WORK
BIBLIOGRAPHY

LIST OF TABLES

Table 5.1. Statistics of Real-World Traces
Table 5.2. Simulation Parameters

LIST OF FIGURES

Figure 1.1. Internals of hard disk drive.
Figure 1.2. Performance gap: CPUs vs. HDD-based storage system.
Figure 1.3. Solid state drive.
Figure 1.4. Architectural system components of an SSD.
Figure 1.5. Internals of a flash memory die.
Figure 2.1. Internal structure of the Samsung SSD.
Figure 3.1. Diagram of storage subsystem topology.
Figure 4.1. (a) Request processing flow, (b) internals of write buffer B, (c) state of Q1 in four different scenarios.
Figure 4.2. Algorithm of the EPO scheme.
Figure 5.1. Performance impact of write buffer size on four schemes.
Figure 5.2. Performance impact of flash page size on four schemes.
Figure 5.3. Performance impact of number of elements on four schemes.
Figure 5.4. Performance impact of write buffer size on four schemes for synthetic benchmarks.
Figure 5.5. Mean response time and throughput of three synthetic benchmarks for varying page size in SSD flash memory.
Figure 5.6. Mean response time and throughput of three synthetic benchmarks for varying number of elements in an SSD.

ACKNOWLEDGEMENTS

I would like to thank my thesis advisor, Professor Tao Xie, for his consistent support and advice. I would also like to thank the other members of the committee for their time and effort. I take this opportunity to thank the other team members of our storage systems lab, Abhinav and Ramya, for all their help. I also thank all those who have contributed to the DiskSim simulator and the SSD Extension to DiskSim.

CHAPTER 1
INTRODUCTION

In today's Internet-driven world, the amount of data stored and served runs into thousands of terabytes and grows every day. With new services such as social networking, new technologies such as high-definition video, and rising expectations for data availability and security, the demand for inexpensive, high-capacity, high-performance mass storage keeps increasing. Currently the hard disk drive (HDD), a non-volatile secondary storage medium, is the preferred device for storing this massive amount of data. It is one of the fundamental components of all modern computers; coupled with a central processing unit (CPU), it implements the basic computer model. An HDD is a non-volatile, random-access device for digital data. It consists of rigid rotating platters on a motor-driven spindle inside a protective enclosure. A platter can store information on both sides: the magnetic surface on each side is divided into sub-micrometer-sized magnetic regions, each storing a binary 0 or 1. The HDD uses a read/write head on each side of a platter to retrieve and store data, and these heads are attached to an actuator that positions the read/write head assembly across the spinning disk [1, 2]. Figure 1.1 [3] shows the internals of an HDD. A platter is logically divided into tracks, and each track is further divided into sectors. Typically each sector holds 512 bytes of data, and a sector is the smallest unit of a read or write request from the host. Several interfaces are available to connect an HDD to a host, e.g., SCSI, SATA, SAS, Fibre Channel, and PCI-E. The time needed to access a particular sector depends on the positions of the sector and the read/write head: when a request arrives to read or write a piece of data, the head has to seek to that sector, and the time for the head to reach the desired sector is called the seek latency. The effective transfer rate of an HDD therefore depends on the seek latency; the lower the seek latency, the higher the transfer rate. As Figure 1.1 shows, an HDD is built largely from mechanical parts (the actuator, actuator arms, spindle motor, and so on), and a mechanical device has inherent limitations. For decades the number of transistors in a CPU has roughly doubled every two years, and processor speed has increased dramatically.

Figure 1.1. Internals of hard disk drive. Source: Overington PC Repairs, Hard Drive Recovery.

It is impractical for the processor to access the HDD directly for every piece of data it works on, so DRAM is used to bridge the gap between processor speed and HDD access speed; even so, the HDD remains the performance bottleneck of the overall system. Figure 1.2 [4] shows the performance gap between CPUs and HDD-based storage systems, a gap that continues to widen.

Figure 1.2. Performance gap: CPUs vs. HDD-based storage system. Source: Tabor Communications, Inc., Back to the Future: Solid-State Storage in Cloud Computing.

Disk capacity has been increasing at a rate of about 60% a year, while disk access latency has been improving by only about 10% per year [5]. To decrease seek time and improve the data access rate, the platter size has to be reduced and the rotational speed increased, which requires more power and dissipates more heat, making the HDD less energy efficient. An HDD also produces noise and vibration because of its mechanical parts, and shock resistance is a concern: the heads or platters can be damaged, and data lost, if an unwanted force is applied to the device.

Flash memory does not have these drawbacks. Flash memory is a type of electrically erasable programmable read-only memory (EEPROM): memory chips that retain information without requiring power. It has gained popularity in many electronic devices such as mobile phones, digital cameras, and MP3 players, and it is also a good option for small storage devices such as USB drives. Flash memory devices use two different logic gates, NOR and NAND, to store data; the names refer to the type of logic gate used in each memory cell. In the internal circuit configuration of NOR flash, the individual memory cells are connected in parallel, which enables the device to achieve random access. NOR flash can be addressed at byte granularity over an external bus for read operations, so it can retrieve as little as one byte. This configuration enables the short read times required for the random access of microprocessor instructions [6]. NOR flash, however, has rather slow write and erase times, because writing and erasing have to be performed block-wise. NOR also has no built-in bad-block management; this has to be handled by the host system. NOR is the flash technology of choice for non-volatile, lower-density, high-speed-read applications, and it is often used for code storage in consumer appliances, cell phones, and PDAs, and as the BIOS in computer systems. A processor can execute code directly from NOR without first copying it to RAM, so NOR is typically used for small amounts of rarely changing code. As an alternative to NOR, NAND flash was developed for high-density data storage. The higher density comes from its smaller cell size, which yields a smaller chip and thus a lower cost per bit; it is achieved by connecting an array of eight memory transistors in series. Exploiting this high storage density and small cell size, NAND flash systems achieve faster write and erase by programming blocks of data [6]. NAND flash cannot be addressed byte by byte; it has to be read and written much like a hard drive, and erasure works only block by block.

Hence a controller is required to access NAND flash properly, and this controller typically also takes care of bad-block management. The lower cost, higher density, and fast block-level program/erase cycles make NAND flash memory the more suitable choice for enterprise data storage. As noted above, flash memory stores information in an array of memory cells made from floating-gate transistors [7]. In traditional single-level cell (SLC) devices, each cell stores only one bit of information. Newer multi-level cell (MLC) devices can store more than one bit per cell by choosing among multiple levels of electrical charge applied to the floating gates of the cells. SLC devices provide lower density but faster write and erase operations, a lower likelihood of error, and more write/erase cycles than MLC. MLC, on the other hand, provides higher density and is cheaper than SLC. Enterprises use both types of flash memory, SLC and MLC, depending on the requirement. Figure 1.3 [8] shows an enterprise solid state drive (SSD), which uses NAND flash memory.

Figure 1.3. Solid state drive. Source: Olin Coles, Toshiba T6UG1XBG Solid State Drive Controller.

An SSD is a data storage device that uses solid-state memory to store persistent data; for enterprise data storage, NAND flash memory is used in the SSD. As the name suggests, it has no mechanical components such as the motor-driven spinning platters, moving heads, or actuator assembly of an HDD. By not having these mechanical parts, the SSD provides higher reliability and consumes much less energy than an HDD [9, 10, 11, 12].

It also provides faster access than the HDD by eliminating seek latency and rotational latency [13, 14], and its physical properties make it vibration-tolerant and shock-resistant [15, 16]. However, enterprises currently face challenges with several aspects of SSDs [11, 13, 15, 17, 18, 19]. The major issues are out-of-place updating [16], time-consuming erase and garbage-collection operations [13], the complicated implementation of the Flash Translation Layer (FTL) [11], and the wear leveling and write endurance [20] of the SSD. In a NAND-flash-based SSD, read/write operations are performed at a finer granularity than erase operations. Flash memory does not allow a memory cell to be reprogrammed before it has been erased back to its default bit value. In a fresh SSD all memory cells are set to 1; a cell can be programmed to 0, but it cannot be set back to 1 without first being erased to its default value, which is what the erase operation does. The erase operation is very expensive, and it is therefore performed at a coarser granularity than read/write operations. When an update is made to data that has already been programmed (written), the new data has to be written to another, already-erased location and the old data marked invalid; this is called an out-of-place update. This property raises further issues: the physical location of data changes constantly as it is updated, so the SSD has to continuously map each logical address to its current physical location. Write endurance is another concern: flash memory tolerates only a limited number of erase operations before it wears out and becomes unreliable. SLC NAND flash technology allows about 100K erase operations, while MLC NAND flash technology allows 30K for consumer applications and 5-10K for enterprise applications. Wear leveling is a technique for lengthening the life span of the SSD: the idea is to distribute the workload, mainly write operations, across the SSD so that at any instant the number of erase operations performed on each block is roughly equal. Power failure is also a major issue for enterprise use of SSDs. Because of out-of-place updates, the SSD has to constantly maintain a table mapping the logical address of each smallest read/write unit to its physical location, and this table is very large. For example, if the smallest read/write unit is 4KB, the SSD capacity is 400GB, and 32 bits are used to point to each location, the table is 400MB, which is huge and has to be kept in memory for frequent updates. The SSD's limited media transfer rate becomes a significant problem on power failure, because the whole mapping table has to be written back to flash memory so that it can be recovered when the system is restored.
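As a quick check of the arithmetic behind that 400MB figure, the short C program below recomputes the table size from the capacity, page size, and entry width used in the example above (these numbers belong to the example, not to any particular drive):

#include <stdio.h>

int main(void) {
    /* Example figures from the text: 400 GB drive, 4 KB pages,
     * one 32-bit (4-byte) physical-address entry per page. */
    unsigned long long capacity = 400ULL * 1024 * 1024 * 1024;   /* bytes */
    unsigned long long page     = 4ULL * 1024;                   /* bytes */
    unsigned long long entry    = 4;                              /* bytes */

    unsigned long long entries  = capacity / page;                /* ~100 million pages */
    unsigned long long table_mb = entries * entry / (1024 * 1024);

    printf("mapping entries: %llu, table size: %llu MB\n", entries, table_mb);
    return 0;
}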

There is also a small buffer that holds data before it is flushed to flash memory, and properly committing that data to flash on power failure is equally important. Saving the whole indirection table and the buffered data to flash memory on power loss is a significant challenge for the data storage industry. An overview of the SSD internals is shown in Figure 1.4 [21]. Data passes through several components of the SSD before it reaches the flash memory. An SSD consists mainly of a controller, flash chips (the memory), DRAM, and a host interface logic unit. The SSD controller provides the interface to the host and executes the SSD firmware; it also contains an embedded processor, ROM, RAM, error correction code (ECC) logic, wear-leveling logic, and other features such as components that act on power failure. The buffer is a high-speed RAM component that bridges the speed difference between the fast host and the slower transfer rate to flash memory, and it helps to increase overall throughput. The flash memory components are the individual flash memory chips that actually store the data. Each flash memory chip is divided into dies, each die is divided into planes, planes are further divided into blocks, and blocks are divided into pages. Figure 1.5 [9] shows the internals of a flash memory die and how the physical media is divided into planes, blocks, and pages. The page size varies by manufacturer but is generally 1KB, 2KB, or 4KB; at the enterprise level the typical page size is 4KB. The number of pages per block also varies; normally each block contains 64 or 128 pages. One page is the smallest unit for a read/write request, while the smallest unit for an erase operation is one block. One of the most critical functions of an SSD is addressing: mapping the logical address space to the physical address space. To replace an HDD with an SSD in an enterprise without major changes to the existing system and the host software stack, the SSD should provide the same interface an HDD provides, so that from the host's perspective using a solid state drive is the same as using a hard disk drive. Physically, one page in an SSD is a collection of multiple sectors. To make this possible the SSD needs a critical component, the FTL, which takes care of mapping the logical address space to the physical address space; this mapping table is also called the indirection table. Once data is written in an SSD it cannot be modified in place: it has to be written to another, empty page, which means the physical location of the data keeps changing with every update.

Figure 1.4. Architectural system components of an SSD. Source: Thomas M. Rent, SSD Architecture.

The FTL has to keep track of the actual physical location of every logical address. When a request arrives to update data, the FTL works in steps: (1) find an empty page, (2) write the updated data to the new page, (3) update the indirection table, and (4) invalidate the old data. The SSD keeps a pool of erased blocks that the FTL can draw on when it is looking for somewhere to write new data. A garbage collector is activated when the number of available erased blocks in the pool drops below a threshold level. The garbage collector searches for blocks containing invalid pages, moves the still-valid pages from such a partially valid block to a new erased block, then erases the old invalid block and returns it to the pool of free blocks.
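To make the update path concrete, here is a minimal, self-contained C sketch of steps (1) through (4). The array sizes, data layout, and function names are purely illustrative; they are not taken from any real firmware or from the simulator used later in this thesis.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define PAGES_PER_BLOCK 4
#define NUM_BLOCKS      8
#define NUM_PAGES       (PAGES_PER_BLOCK * NUM_BLOCKS)
#define INVALID         UINT32_MAX

static uint32_t l2p[NUM_PAGES];        /* indirection table: logical -> physical page */
static bool     valid[NUM_PAGES];      /* physical page holds current data            */
static bool     erased[NUM_PAGES];     /* physical page is programmable               */
static char     media[NUM_PAGES][16];  /* stand-in for the flash array                */

/* Step (1): find an empty (erased) page. When this fails in real firmware,
 * the garbage collector described above reclaims blocks with invalid pages. */
static uint32_t find_empty_page(void)
{
    for (uint32_t p = 0; p < NUM_PAGES; p++)
        if (erased[p]) return p;
    return INVALID;
}

static void ftl_write(uint32_t lpn, const char *data)
{
    uint32_t new_ppn = find_empty_page();
    if (new_ppn == INVALID) return;                            /* GC would run here       */
    strncpy(media[new_ppn], data, sizeof media[new_ppn] - 1);  /* (2) program the new page */
    erased[new_ppn] = false;
    if (l2p[lpn] != INVALID) valid[l2p[lpn]] = false;          /* (4) invalidate old copy  */
    l2p[lpn] = new_ppn;                                        /* (3) update the table     */
    valid[new_ppn] = true;
}

int main(void)
{
    for (uint32_t p = 0; p < NUM_PAGES; p++) { l2p[p] = INVALID; erased[p] = true; }
    ftl_write(5, "v1");   /* first write of logical page 5             */
    ftl_write(5, "v2");   /* update: lands on a new physical page      */
    printf("logical page 5 now maps to physical page %u\n", (unsigned)l2p[5]);
    return 0;
}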

Figure 1.5. Internals of a flash memory die. Source: N. Agrawal, V. Prabhakaran, T. Wobber, J. Davis, M. Manasse, and R. Panigrahy, Design Tradeoffs for SSD Performance, Proc. USENIX Annual Technical Conference, pages 57-70.

The garbage collector also affects entries in the indirection table as it moves valid pages from a partially valid block to another block. This process amplifies the number of read/write operations beyond what the SSD actually receives from the host: if heavy random write traffic arrives from the host, the SSD has to write pages to new free locations and erase more invalidated blocks, which further increases the number of internal writes. Poor random write performance is the biggest challenge the industry faces right now; a random write workload with no spatial or temporal locality drags down the overall performance of the SSD. One study of SSD random write performance reports that under a pure 4KB random read workload an SSD is 20x faster than an HDD, but under a pure 4KB random write workload the same SSD is 15x slower than the HDD; that study used a SanDisk SATA 5000 SSD and a Seagate 15,000 RPM SAS HDD [22].

Reading a page from flash into the 4KB register on the corresponding plane takes 25 microseconds, writing a page to the media takes 200 microseconds, and, worst of all, the erase operation, which works at block granularity, takes 1.5 milliseconds [9]. SSDs therefore deliver very poor random write performance, and new techniques have to be developed to improve their behavior under random write workloads before they are suitable for the enterprise. Relatively little research has addressed the issue of random write performance in SSDs. The approaches researchers have taken can be categorized into: (1) adding a non-volatile RAM (NVRAM) buffer [23], (2) enhancing the FTL engine [24], and (3) introducing a cache to buffer and reorder write requests. The last category has advantages over the first two: adding NVRAM is expensive and requires gigabytes of RAM, whereas the third category needs less than 128MB, and proposing a cache does not require modifying the FTL, so it can be used with any existing FTL engine. Our work therefore falls into the third category and aims to improve SSD performance under random write workloads. Our element-level parallel optimization (EPO) scheme takes a different approach: it accesses independent elements (packages) in parallel. We propose a cache between the FTL and the flash memory. Unlike block padding least recently used (BPLRU) [25], whose objective is to exploit the temporal and spatial locality of the workload and reduce the number of dirty pages flushed from the buffer to flash memory, we exploit the SSD's ability to access its independent elements in parallel. In this scheme a number of independent requests can be programmed simultaneously, and in the same way the garbage collector can perform erase operations in parallel. We conducted this research with extensive simulations using the SSD model developed by Microsoft Research [7] on top of the DiskSim 4.0 simulator [26], and we present results for a collection of real-world online transaction processing (OLTP) application traces. This thesis is organized as follows: Chapter 2 discusses related work and motivation, Chapter 3 describes the DiskSim 4.0 simulator and Microsoft's SSD add-on, Chapter 4 explains the design and implementation of the EPO scheme, Chapter 5 presents the experimental results, and Chapter 6 concludes with a brief discussion of future work.

CHAPTER 2
RELATED WORK AND MOTIVATION

In the effort to make the SSD a primary storage device, NAND flash memory is packaged in a hard-drive form factor and designed as an alternative to conventional hard drives. The performance of an SSD depends mainly on two factors: (1) the architecture of the SSD and (2) the type of workload. In this thesis we focus on improving the behavior of the SSD under random writes, so choosing an SSD whose internal architecture has the potential to work well with a random write workload is critical. Samsung's K9XXG08UXM series NAND flash part [9] has a sound internal architecture, and we conduct our research based on it; the rest of the thesis explains all of our work in terms of this Samsung SSD architecture. Figure 2.1 shows the internal structure of the Samsung SSD, organized into four elements (packages).

Figure 2.1. Internal structure of the Samsung SSD.

The Samsung SSD architecture is organized into multiple identical elements (also known as packages). Each element contains two dies, which share a data bus and a control bus and are serially connected. Each element in the SSD has its own data line to the controller but shares the control line.

Because each element has an independent data line, these elements can be accessed in parallel and a higher aggregate data transfer rate can be achieved. The two dies in each element are serially connected, which allows interleaving between read, write, and erase operations. The granularities of read/write and erase operations differ, and the times required to perform them differ as well, so these operations can be interleaved between the two dies over the serial connection to make full use of the SSD architecture. In this thesis we focus on write operations only. Each die in an element is further divided into four planes, each plane is divided into blocks, and blocks are further divided into pages. A page is the smallest unit and is 4KB in size; read and write requests can operate on as little as 4KB of data. Each block contains 64 pages, so each block is 256KB, which is the smallest unit for an erase operation. Each plane contains 2048 blocks. Blocks are distributed across pairs of planes: of blocks 0 to 4095, the even-numbered blocks reside on plane 0 while the odd-numbered blocks reside on plane 1, and blocks 4096 to 8191 are distributed between planes 2 and 3 in the same way. This distribution also helps in interleaving read, write, and erase operations. Each die in an element is therefore 2GB, which makes each element 4GB in size. We aim to exploit this independent element-level architecture of the SSD by accessing the elements in parallel. To use an SSD in place of an HDD and make the change transparent to the host system, the FTL is implemented in the controller; its functionality and drawbacks were discussed in Chapter 1. Little research has been conducted on making SSDs suitable for enterprise-level random write workloads [11, 23, 25]. Leventhal suggested adding non-volatile RAM (NVRAM), in the form of battery-backed DRAM, as a cache [23]: writes are committed to an NVRAM ring buffer and immediately acknowledged to the host, the data in the NVRAM is then asynchronously written out to the drive, and once the data is committed to the drive the corresponding record can be dropped from the NVRAM. This greatly improves response time and overall system performance but has drawbacks: the required NVRAM is between 2GB and 4GB and is very costly, and in an enterprise setting 4GB of NVRAM is small enough to fill up quickly before the data can be flushed to the drive. Another approach, taken by Gupta et al. [11], is to modify the flash translation layer. They proposed the demand-based flash translation layer (DFTL): rather than mapping purely at page level or block level, a hybrid of page-level and block-level mapping should be used, which saves the SRAM required to hold the mapping table and improves overall performance.

This approach does not require any extra hardware, but it does require modifying the existing FTL, which would be costly to integrate into existing SSDs. Kim and Ahn proposed embedding a small RAM write buffer (e.g., 1-16MB) as a cache, with a new technique called BPLRU that adapts the well-known least recently used (LRU) scheme to SSDs [25]. The write buffer sits before the FTL, so the controller does not know where these buffered write requests will physically land on the SSD. The page-padding technique used by BPLRU amplifies the total number of write requests and adds the overhead of reading the missing pages that belong to the same block. A block moves to the head of the buffer whenever any page from that block is written, so all pages belonging to that block also move to the head of the buffer even though they were not accessed recently. The block none of whose pages has been accessed recently is chosen for eviction: the pages of the victim block that are not in the buffer are read from flash memory, and the whole logically consecutive block is then flushed to flash memory. The authors claim that the overhead of these extra reads and writes is worthwhile, but this assumption holds only if logically consecutive pages are also stored in the same physical location in flash, which is not generally the case in modern SSDs. Industry is still facing performance problems with random write workloads, especially in database systems. The internal structure of the Samsung SSD architecture, with its three different levels of parallelism, shows good potential for improving performance (Figure 2.1). The three levels of parallelism are: (1) element-level concurrency, (2) die-level parallelism, and (3) plane-level interleaving. As seen in Figure 2.1, all elements share a common control bus but each element has its own data bus to the controller, so the elements can be accessed independently and concurrently. This element-level parallelism can greatly improve the performance of the SSD; it can also be called ganging of elements [9], or system-level concurrency, because of its similarity to RAID organization in HDD storage systems. Within an element, multiple dies can serve read/write requests simultaneously, but because the dies within one element are connected serially, the degree of parallelism is constrained by the speed of the serial bus. The innermost level of parallelism is at the plane level, where two fast internal copy-back operations on two different planes can be interleaved.
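Because EPO (Chapter 4) decides which queue a buffered request belongs to purely from its physical page address, it helps to see how such an address maps onto this geometry. The C sketch below assumes simple page-level striping across the four elements and ignores the even/odd block pairing across plane pairs described above, so it is an illustration of the idea rather than the simulator's exact layout.

#include <stdio.h>
#include <stdint.h>

/* Geometry from the text: 4 KB pages, 64 pages/block, 2048 blocks/plane,
 * 4 planes/die, 2 dies/element, 4 elements (Samsung K9XXG08UXM-style part). */
enum {
    PAGES_PER_BLOCK  = 64,
    BLOCKS_PER_PLANE = 2048,
    PLANES_PER_DIE   = 4,
    DIES_PER_ELEMENT = 2,
    NUM_ELEMENTS     = 4
};

typedef struct { uint32_t element, die, plane, block, page; } flash_addr_t;

/* Assumes physical page numbers are striped across elements page by page,
 * so consecutive pages land on different elements and can be written in parallel. */
static flash_addr_t decompose(uint64_t ppn)
{
    flash_addr_t a;
    a.element = (uint32_t)(ppn % NUM_ELEMENTS);   /* which package (EPO queue index) */
    uint64_t within = ppn / NUM_ELEMENTS;         /* page index inside that element  */

    a.page  = within % PAGES_PER_BLOCK;  within /= PAGES_PER_BLOCK;
    a.block = within % BLOCKS_PER_PLANE; within /= BLOCKS_PER_PLANE;
    a.plane = within % PLANES_PER_DIE;   within /= PLANES_PER_DIE;
    a.die   = (uint32_t)(within % DIES_PER_ELEMENT);
    return a;
}

int main(void)
{
    for (uint64_t ppn = 0; ppn < 6; ppn++) {
        flash_addr_t a = decompose(ppn);
        printf("page %llu -> element %u, die %u, plane %u, block %u, page %u\n",
               (unsigned long long)ppn, a.element, a.die, a.plane, a.block, a.page);
    }
    return 0;
}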

Continuous improvement in SSD architecture and system-level organization exposes this multi-level concurrency in SSDs, which motivates us to develop a parallelism scheme that exploits the concurrency and improves SSD performance under random write workloads. The EPO scheme reorders the workload that actually reaches the flash memory, taking advantage of the SSD's ability to access its elements in parallel.

CHAPTER 3
A SIMULATOR: DISKSIM

DiskSim is an efficient, accurate, and highly configurable disk system simulator developed to support research into various aspects of storage subsystem architecture [27]. DiskSim includes modules to simulate HDDs accurately, as well as modules for many other storage subsystem components such as controllers, buses, device drivers, and adapters. The disk drive module in DiskSim simulates modern disk drives in great detail. Because DiskSim simulates almost all system-level components, its results closely approximate real-world behavior. DiskSim has been validated both as part of a comprehensive system-level model and as a standalone subsystem, against five different disk drives from three different manufacturers. DiskSim can be driven by externally provided I/O request traces or by internally generated synthetic workloads; it supports several trace formats, and new formats can be added easily. DiskSim is written in C, requires no special system software, and is a command-line tool. It requires five command-line arguments and also accepts optional additional arguments that override some of the parameters. The command is as follows:

disksim <parfile> <outfile> <tracetype> <tracefile> <synthgen> [ par_override ... ]

Here disksim is the name of the executable file, found under the src directory. parfile is a parameter file in which the various DiskSim configurations are made. There are three basic components of the parameter file: blocks delimited by {}, instantiations, and topology specifications. Each block contains names and their associated values; names are strings, and values can be integers, floats, strings, blocks, or lists delimited by []. The parameter file can be organized flexibly, but there are a few requirements: components cannot be referenced before they are defined, and every parameter file must define the Global and Stats blocks. The Global block contains parameters that are used throughout the simulation, and the Stats block contains a series of boolean parameters that give users the choice of whether or not to report each set of statistics.

Also, the parameter file must define the Proc and Synthio blocks if a synthetic workload is used in the simulation. A typical parameter file defines drivers, buses, controllers, disks, and storage subsystems. All components must first be defined with their configurable parameters; how they are interconnected is then defined in the topology. An example topology follows, and the diagram of the storage system corresponding to it is shown in Figure 3.1.

topology disksim_iodriver driver0 [
   disksim_bus bus0 [
      disksim_ctlr ctlr0 [
         disksim_bus bus1 [
            disksim_ctlr ctlr1 [
               disksim_bus bus2 [
                  disksim_disk disk0 [],
                  disksim_disk disk1 []
               ] # end of bus2
            ] # end of ctlr1
         ], # end of bus1
         disksim_bus bus2 [
            disksim_ctlr ctlr2 [
               disksim_bus bus3 [
                  disksim_disk disk2 [],
                  disksim_disk disk3 []
               ] # end of bus3
            ] # end of ctlr2
         ] # end of bus2
      ] # end of ctlr0
   ] # end of bus0
] # end of driver0

Different types of storage devices can be used in a simulation, each with its own properties and configuration. Disk specifications can be configured in a .diskspecs file, and this file needs to be sourced in the parameter file before the disk is instantiated or referenced. All input component specifications must be instantiated and given names before they can be incorporated into a simulated storage system. A component is instantiated in the following form:

instantiate <name list> as <instance name>

where <instance name> is the name given to the component specification and <name list> is a list of names for the instantiated devices. For example, instantiate [ ctlr0 ] as CTRL0 creates a controller named ctlr0 using the CTRL0 specification.

Figure 3.1. Diagram of storage subsystem topology.

More than one instance of any component can be created; for example, instantiate [ disk0, disk2 .. disk5 ] as HP_C2249A creates five instances named disk0, disk2, disk3, disk4, and disk5 using the HP_C2249A specification. Multiple disks can be arranged in a RAID architecture, which is configured in the disksim_iosim {} block. outfile is the output file where the simulation results are stored; if stdout is given for outfile, the results are displayed on the screen. The output file contains a large number of statistics about the simulated storage components, collected by DiskSim. The first part of the output file contains the configuration parameters specified in the parameter file, simply copied from the parameter file into the output. The remainder contains the aggregate statistics of the simulation run, including both the characteristics of the simulated workload and the performance indicators of the various storage components. The size of the output file can be reduced by configuring DiskSim not to report certain sets of statistics. tracetype specifies the format of the trace used. DiskSim supports various trace formats, and new formats can be added; the supported formats are ASCII, the validation trace format (validate), and the raw format (raw).

The default trace format is ASCII. Each line of an ASCII trace file contains five fields describing a single disk request:
1. Request arrival time: a floating-point value in milliseconds, relative to the start of the simulation (time 0.0). The arrival time of each request must be less than that of the request that follows it.
2. Device number: an integer specifying which device the request goes to.
3. Block number: an integer logical block address (LBA) specifying the address of a sector.
4. Request size: an integer specifying the size of the request in device blocks, i.e., how many sectors the host is requesting.
5. Request flag: 0 specifies a write request and 1 specifies a read request.
The synthgen flag specifies whether a synthetically generated workload or a real-world trace is used for the simulation: 1 indicates that a synthetic workload is used, while 0 specifies that a trace will be fed in by the user. DiskSim itself is an HDD simulator; to simulate an SSD, an extension developed by Microsoft Research [28] is used in our experiments. This extension is not built around any specific SSD product: it simulates an idealized SSD parameterized by the properties of NAND flash chips such as read, write, and erase latency, number of chips, connectivity, and chip bandwidth [28]. The purpose of the extension is to make the SSD look like an HDD to the rest of DiskSim by tuning these parameters, and the SSD is instantiated the same way an HDD is. The SSD parameters are given in the .parv file; with them, the user can specify the number of elements in the flash chip, the number of planes per die, the number of blocks per plane, and the number of pages per block, as well as the page size, page read latency, page write latency, and block erase latency. This extension provides a very good environment for simulating an SSD and is very easy to use.
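For concreteness, here is a small hand-written excerpt in the default ASCII format (arrival time in milliseconds, device number, LBA, request size in sectors, and the flag, where 0 means write and 1 means read), followed by a hypothetical invocation. The parameter-file and trace-file names are placeholders, not files shipped with DiskSim.

0.000   0   1000   8   0
1.250   0   2048   8   0
2.500   0   1000   8   0
3.125   0   4096   8   0
5.000   0   2048   8   1

disksim ssd.parv ssd.out ascii sample.trace 0

The first four requests are 4KB (eight-sector) writes, the last one is a read, and the final argument 0 tells DiskSim to take requests from the trace file rather than from its synthetic workload generator.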

CHAPTER 4
DESIGN AND IMPLEMENTATION

In this chapter we present the EPO scheme, which exploits element-level parallelism in the SSD to improve its performance under random write workloads. The basic idea and design are illustrated with an example, followed by a detailed description of the algorithm's implementation (see Figure 4.1).

Figure 4.1. (a) Request processing flow, (b) internals of write buffer B, (c) state of Q1 in four different scenarios.

Figure 4.1 shows how requests flow through the SSD firmware, how they pass through the buffer RAM before going to flash memory, and how they are rearranged to minimize the number of write and erase operations. We focus only on write requests; for a read request, the FTL first looks into write buffer B and, on a cache hit, returns the data from B, while on a miss it fetches the data from flash memory. Each write request arriving at the SSD is first processed by the pre-processor. The granularity of a write request is one page (4KB in our case), so if a write request arrives that is smaller than 4KB or not aligned to 4KB, the pre-processor adjusts the size of the request and aligns it to 4KB. Because the SSD writes 4KB of data at a time, if a write request comes from the host to update one sector (512 bytes), the FTL has to read the whole page (8 sectors) containing that sector, modify the requested sector, and then write the whole page back to flash memory; this is called read-modify-write. For example, if a request arrives from the host to modify sector 13, the FTL reads page two (sectors 8 to 15), modifies sector 13, and writes the whole of page two (sectors 8 to 15) back to flash memory.
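A small C sketch of this alignment follows; the function name and interface are ours, chosen for illustration, and are not part of DiskSim or any firmware.

#include <stdio.h>
#include <stdint.h>

#define SECTOR_SIZE       512
#define SECTORS_PER_PAGE  8            /* 4 KB page = 8 sectors */

/* Expand a sector-granularity request so that it covers whole pages,
 * as the pre-processor does before handing the request to the FTL. */
static void align_to_page(uint32_t lba, uint32_t nsectors,
                          uint32_t *aligned_lba, uint32_t *aligned_nsectors)
{
    uint32_t first = lba / SECTORS_PER_PAGE;                    /* first page touched */
    uint32_t last  = (lba + nsectors - 1) / SECTORS_PER_PAGE;   /* last page touched  */
    *aligned_lba      = first * SECTORS_PER_PAGE;
    *aligned_nsectors = (last - first + 1) * SECTORS_PER_PAGE;
}

int main(void)
{
    uint32_t alba, an;
    align_to_page(13, 1, &alba, &an);   /* the sector-13 example from the text */
    printf("sector 13 -> sectors %u..%u (one full page)\n", alba, alba + an - 1);
    return 0;
}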

In this thesis we assume that each write request is one page in size. The task of the pre-processor is to align the requested LBA to the number of sectors contained in one page (eight sectors in our case) and to make its size equal to one page (4KB in our case). The pre-processor then sends the request to the FTL, where its logical address is mapped to a physical address. The FTL first looks into the write buffer before it allocates a new physical location for the request. If the FTL finds that the request is already present in buffer B, it leaves the physical location of that request unchanged and forwards it to write buffer B, where the request is absorbed by the buffer itself. If the FTL does not find the request in write buffer B, it maps the logical address to a new physical location on flash and caches the request in write buffer B. Write buffer B is DRAM embedded in the SSD. There are different approaches to preventing data loss on power failure: one simple approach is an on-board battery, and a more sophisticated one is to place capacitors on the SSD that provide enough energy to save the indirection table, the data in write buffer B, and other information to flash memory. Dedicated software modules also run in the background and periodically save all of the above data to flash, so on power failure there is a good chance that little uncommitted data remains to be stored. Finally, EPO manages the requests in the write buffer and rearranges them before they go to flash memory. Write buffer B is divided into blocks whose size equals the SSD page size, 4KB in our experiments. Each request in write buffer B is one SSD page in size and carries its logical address and physical address as metadata. There is a good reason for working at page granularity: consecutive pages are distributed across the elements, so they can be accessed simultaneously. This also helps with load balancing, preventing any one element from being overloaded with requests, and it helps with wear leveling by keeping the working life of each element at roughly the same level. EPO maintains a pool of free blocks and a number of queues equal to the number of elements in the SSD. For example, if the SSD has four elements, EPO maintains four different queues, one for each element (see Figure 4.1b). Each queue is a linked list of nodes, where each node is one buffer block belonging to its respective element.

29 20 list of nodes, each node is one block corresponding to its respective element. Requests visiting an element are queued into its corresponding queue. Figure 4.1b shows how EPO manages pool of free blocks and four queues. When a new request arrives, EPO acquires a block from free block pool to accommodate the new request and places it into its corresponding queue based on its physical address. The free block pool shrinks as new write requests arrive to buffer B and eventually it gets empty when all the free blocks are allocated to new requests. When the free block pool gets empty and a new request arrives then a victim has to be chosen from already allocated blocks. Each time when a victim needs to be chosen for a new request, EPO selects node from tail of the each queue and victims are flushed to the flash memory. One of the freed blocks is used for the new request and rests are reclaimed by the free block pool. The performance of the algorithm depends on how queues are managed and how the victims are selected. Assume there are six requests (e.g. 1, 2, 3, 2, 4, 3 arrives in same sequence) as shown in Figure 4.1c are single page requests and they all target the same element, element 1. When requests 1, 2 and 3 arrive, three blocks are taken from the free block pool, one block is allocated to each request and they are put into the Q 1. Request coming to an element is put on the head of the queue, so in this case, request one resides at the tail of the queue while request 3 is at the head of the queue Q 1 (see scenario one, Figure 4.1c). Now as shown in scenario two, if request 2 arrives after 3, it is just modified in the write buffer and put it on the head of the queue. Neither free block is needed from the pool nor a logical to physical mapping required. At this instance request 2 is at the head of the queue Q 1 (scenario two). This way EPO also exploits any possible temporal locality. Now a new request 4 arrives, a free block is taken from the free block pool and allocated to request 4, it is simply put on the head of the queue (scenario three). Now a request for page three arrives, block 3 is already available in the buffer B, so no need to take a free block from pool. Data in block 3 is updated and block 3 is shifted to the head of the queue, see scenario four. This process is similar for all requests for any element and they all are put into their respective queue in write buffer. This process is continued till pool of free block gets empty. Now a new request arrives for the same element (element one, in above example), EPO first look for a block from free pool but it is already empty, so EPO has to select a victim from write buffer B and flush it to flash memory. EPO selects a block from the tail of queue Q 1 and flush it to flash

EPO selects the block at the tail of queue Q1 and flushes it to flash memory, which follows the LRU principle. The one change we make to victim selection is that EPO takes a victim block from the tail of every queue and flushes them all to flash memory: in the example above, EPO selects a victim from the tail of each of Q1 through Q4, and all four requests are flushed to flash simultaneously. The SSD thus writes four requests in the time it would take to write one, because all elements are accessed in parallel, saving overall write time. A useful side effect is that the number of freed blocks equals the number of queues (elements): in our case four blocks are freed and returned to the free block pool, so more requests can be accommodated before another victim selection is needed. We now describe how the algorithm works. Figure 4.2 outlines the EPO algorithm. Note that the input P of the EPO algorithm is the output of the FTL layer (Figure 4.1a): each request coming out of the FTL is a write request not already present in write buffer B, its address is page-aligned (to 8 sectors in our case), its size is one page (4KB in our case), and it has already been mapped to a physical address in flash memory. The other input is an empty write buffer B managed by EPO; the write buffer is implemented as a DRAM cache inside the SSD, and its size is configurable, varying from 4MB to 32MB in our experiments. The output of the EPO scheme is a reshaped and rearranged set of single-page write requests, R, which is what actually goes to the flash memory; the key property of R is that it can largely exploit the element-level concurrency in the flash memory. We now describe each step in Figure 4.2 in detail. First, EPO creates e empty queues, Q1 to Qe, in B, where e is the number of elements in the SSD; each queue maintains the requests that target one particular element. Next, EPO processes each request in P within the outermost loop (steps 2~29) in the same manner. EPO first checks whether a request in P is a multiple-page request or a single-page request (step 5); if it is a multiple-page request, EPO chops it into multiple single-page requests (step 6). Consequently, only single-page requests are kept in each queue, which in turn increases the element-level concurrency. As mentioned, EPO keeps the logical and physical address of each request in buffer B as metadata.

Figure 4.2. Algorithm of the EPO scheme.

For each newly arrived single-page request, EPO searches the write buffer for a matching physical address in the corresponding queue. If EPO finds the same request, it updates the data and returns to serve the next request; the updated block is also moved to the head of the queue so that temporal locality can be exploited (step 15). If no request with the same physical address is found, EPO checks for an available block in the free block pool to accommodate the new request (Figure 4.1b) and, if one is available, allocates a block from the pool to the newly arrived request.

When the free block pool is empty, EPO has to select e victims, one from each queue (steps 18~21). The request block at the tail of each queue is chosen as a victim, and its arrival time is updated to the arrival time of the new request (steps 19~20). All victims are then inserted into R, the request set sent to the SSD. While the original request set P exhibits a random write pattern, the reshaped and rearranged request set R becomes a batch-based request stream: requests evicted from B at the same time are bundled together to form a batch, the requests in one batch go to different elements of the SSD, and they share the same arrival time. In this manner all elements can serve the requests in one batch in parallel.
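The following compact C sketch captures the buffer-management policy just described: one queue per element, a shared pool of buffer blocks, promotion to the head of a queue on a hit, and batch eviction of one victim from the tail of every queue when the pool runs dry. It uses fixed-size arrays instead of the linked lists described above, the element and buffer counts are deliberately tiny, and requests are reduced to bare physical page numbers, so it should be read as a sketch of the policy rather than the implementation used in the simulator.

#include <stdio.h>
#include <stdint.h>

#define NUM_ELEMENTS   4
#define BUFFER_BLOCKS  8                 /* total 4 KB slots in write buffer B  */
#define QUEUE_CAP      BUFFER_BLOCKS     /* upper bound on entries per queue    */

typedef struct {
    uint64_t ppn[QUEUE_CAP];             /* physical page cached in each slot    */
    int      len;                        /* slot 0 = tail (oldest), len-1 = head */
} queue_t;

static queue_t queues[NUM_ELEMENTS];
static int     free_blocks = BUFFER_BLOCKS;

static int element_of(uint64_t ppn) { return (int)(ppn % NUM_ELEMENTS); }

/* On a hit, move the existing entry to the head of its queue (temporal locality). */
static int touch(queue_t *q, uint64_t ppn)
{
    for (int i = 0; i < q->len; i++) {
        if (q->ppn[i] == ppn) {
            for (int j = i; j < q->len - 1; j++) q->ppn[j] = q->ppn[j + 1];
            q->ppn[q->len - 1] = ppn;
            return 1;                    /* data updated in place in B          */
        }
    }
    return 0;
}

/* Evict one victim from the tail of EVERY queue: together they form one batch
 * that the flash can program on all elements in parallel. */
static void evict_batch(void)
{
    printf("batch:");
    for (int e = 0; e < NUM_ELEMENTS; e++) {
        queue_t *q = &queues[e];
        if (q->len == 0) continue;
        printf(" page %llu -> element %d", (unsigned long long)q->ppn[0], e);
        for (int j = 0; j < q->len - 1; j++) q->ppn[j] = q->ppn[j + 1];
        q->len--;
        free_blocks++;                   /* freed slot returns to the pool      */
    }
    printf("\n");
}

/* One single-page write request arriving from the FTL. */
static void epo_write(uint64_t ppn)
{
    queue_t *q = &queues[element_of(ppn)];
    if (touch(q, ppn)) return;           /* already buffered: no new block needed */
    if (free_blocks == 0) evict_batch(); /* pool empty: flush one victim per queue */
    q->ppn[q->len++] = ppn;              /* take a block and enqueue at the head   */
    free_blocks--;
}

int main(void)
{
    /* Eleven single-page writes; the last one arrives with the pool empty. */
    uint64_t reqs[] = { 1, 2, 3, 2, 4, 3, 7, 11, 16, 21, 25 };
    for (unsigned i = 0; i < sizeof reqs / sizeof reqs[0]; i++)
        epo_write(reqs[i]);
    return 0;
}

With these eleven requests, the final write finds the pool empty and triggers a single batch in which one victim page is flushed to each of the four elements at the same time.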

CHAPTER 5
PERFORMANCE EVALUATION

In this chapter we present the experimental results of our algorithm under a variety of configurations, including write buffer size, flash page size, and number of elements in the SSD. Our primary focus is how much performance gain the EPO scheme achieves in response time and throughput. We measured the response time and throughput of four different schemes: EPO, LRU, BPLRU [25], and NoCache. To conduct these experiments we used three real-world system traces, Financial1 [29], Financial2 [29], and TPC-C [30], to evaluate the performance of EPO against the other three schemes. We also used synthetically generated workloads consisting of 100% random write requests to further evaluate the algorithm; their main purpose is to show how EPO performs under a purely random write workload. This chapter describes the experimental settings, the impact of write buffer size, page size, and number of elements on SSD performance, the nature of the traces used, and how EPO outperforms the other three schemes. All simulation experiments are conducted in three independent stages run sequentially: pre-processing, reshaping, and feeding. In the pre-processing stage, the pre-processor (Figure 4.1a) performs four different tasks:
1. The real-world traces contain both read and write requests; since this work focuses only on write requests, the first task is to remove all read requests from the traces.
2. The original traces are very long and contain millions of requests, which would take a very long time to simulate, so we use only the first few million write requests; the second task is therefore to truncate the original workloads to a suitable length.
3. In the TPC-C trace the LBA range is too large to fit the SSD configuration we used, so the pre-processor evenly shrinks the trace's logical address space so that the logical address of each write request can be physically mapped onto the SSD; the logical address spaces of Financial1 and Financial2 are already suitable, so no shrinking is needed for those traces.
4. As discussed in Chapter 4, the SSD extension to DiskSim presents the SSD as an HDD and groups sectors into pages, so the pre-processor must make each request size a multiple of the page size; in our experiments the request size must be a multiple of eight sectors (i.e., 4KB).
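These four steps can be pictured as a simple filter over the ASCII trace format of Chapter 3. The C sketch below is our illustration of that pipeline, not the pre-processor actually used in the experiments; the write cap and the LBA scale factor are placeholder values.

#include <stdio.h>

#define MAX_WRITES       2000000UL
#define SECTORS_PER_PAGE 8
#define LBA_SCALE        1            /* set >1 to shrink an over-large address space */

int main(void)
{
    double arrival;
    unsigned long dev, lba, size, flag, kept = 0;

    while (scanf("%lf %lu %lu %lu %lu", &arrival, &dev, &lba, &size, &flag) == 5) {
        if (flag != 0) continue;                          /* (1) drop read requests   */
        if (++kept > MAX_WRITES) break;                   /* (2) truncate the trace   */
        lba /= LBA_SCALE;                                 /* (3) shrink the LBA space */
        lba -= lba % SECTORS_PER_PAGE;                    /* (4) align to a page...   */
        size = ((size + SECTORS_PER_PAGE - 1) / SECTORS_PER_PAGE) * SECTORS_PER_PAGE;
        printf("%.3f %lu %lu %lu 0\n", arrival, dev, lba, size);  /* ...whole pages   */
    }
    return 0;
}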

The output of the pre-processor is now suitable for the SSD storage system. In the reshaping stage, the traces go through the different buffer management schemes: each scheme reshapes the trace according to its management policy before it is fed to the SSD. Besides our own scheme, EPO, we implemented three other schemes, and we ran all four buffer management schemes, NoCache, LRU, BPLRU, and EPO, on a Dell PowerEdge 1900 server with two quad-core Intel E-series processors and 8GB of FB-DIMM memory. After the FTL maps the logical address of each request to a physical address, the requests are buffered into write buffer B (Figure 4.1b) and managed by the individual scheme, such as EPO. The output of the write buffer is a rearranged request set containing all write requests evicted from the buffer according to the victim selection policy of the buffer management scheme. In the final stage, this rearranged request set is fed to the SSD extension to the DiskSim simulator. We evaluate the four buffer management schemes by running simulations over three real-world traces, Financial1 [29], Financial2 [29], and TPC-C [30], which have been widely used in research by different authors. The statistics of the real-world traces are listed in Table 5.1.

Table 5.1. Statistics of Real-World Traces
Workloads               Financial1    Financial2    TPC-C
Number of writes        2,000,000     -             2,000,000
Mean write size (KB)    -             -             -
Writes per second       -             -             -
Write size range (KB)   -             -             -

Financial1 and Financial2 are taken from OLTP [31] applications running at two large financial institutions [29]. Financial1 is a write-dominant trace containing more than 60% write requests, while Financial2 is a read-dominant trace containing more than


More information

Using Synology SSD Technology to Enhance System Performance Synology Inc.

Using Synology SSD Technology to Enhance System Performance Synology Inc. Using Synology SSD Technology to Enhance System Performance Synology Inc. Synology_WP_ 20121112 Table of Contents Chapter 1: Enterprise Challenges and SSD Cache as Solution Enterprise Challenges... 3 SSD

More information

Sistemas Operativos: Input/Output Disks

Sistemas Operativos: Input/Output Disks Sistemas Operativos: Input/Output Disks Pedro F. Souto (pfs@fe.up.pt) April 28, 2012 Topics Magnetic Disks RAID Solid State Disks Topics Magnetic Disks RAID Solid State Disks Magnetic Disk Construction

More information

Chapter 6. 6.1 Introduction. Storage and Other I/O Topics. p. 570( 頁 585) Fig. 6.1. I/O devices can be characterized by. I/O bus connections

Chapter 6. 6.1 Introduction. Storage and Other I/O Topics. p. 570( 頁 585) Fig. 6.1. I/O devices can be characterized by. I/O bus connections Chapter 6 Storage and Other I/O Topics 6.1 Introduction I/O devices can be characterized by Behavior: input, output, storage Partner: human or machine Data rate: bytes/sec, transfers/sec I/O bus connections

More information

With respect to the way of data access we can classify memories as:

With respect to the way of data access we can classify memories as: Memory Classification With respect to the way of data access we can classify memories as: - random access memories (RAM), - sequentially accessible memory (SAM), - direct access memory (DAM), - contents

More information

Choosing the Right NAND Flash Memory Technology

Choosing the Right NAND Flash Memory Technology Choosing the Right NAND Flash Memory Technology A Basic Introduction to NAND Flash Offerings Dean Klein Vice President of System Memory Development Micron Technology, Inc. Executive Summary A 75% increase

More information

Understanding endurance and performance characteristics of HP solid state drives

Understanding endurance and performance characteristics of HP solid state drives Understanding endurance and performance characteristics of HP solid state drives Technology brief Introduction... 2 SSD endurance... 2 An introduction to endurance... 2 NAND organization... 2 SLC versus

More information

NAND Basics Understanding the Technology Behind Your SSD

NAND Basics Understanding the Technology Behind Your SSD 03 Basics Understanding the Technology Behind Your SSD Although it may all look the same, all is not created equal: SLC, 2-bit MLC, 3-bit MLC (also called TLC), synchronous, asynchronous, ONFI 1.0, ONFI

More information

Best Practices for Optimizing SQL Server Database Performance with the LSI WarpDrive Acceleration Card

Best Practices for Optimizing SQL Server Database Performance with the LSI WarpDrive Acceleration Card Best Practices for Optimizing SQL Server Database Performance with the LSI WarpDrive Acceleration Card Version 1.0 April 2011 DB15-000761-00 Revision History Version and Date Version 1.0, April 2011 Initial

More information

Solid State Technology What s New?

Solid State Technology What s New? Solid State Technology What s New? Dennis Martin, President, Demartek www.storagedecisions.com Agenda: Solid State Technology What s New? Demartek About Us Solid-state storage overview Types of NAND flash

More information

Accelerating I/O- Intensive Applications in IT Infrastructure with Innodisk FlexiArray Flash Appliance. Alex Ho, Product Manager Innodisk Corporation

Accelerating I/O- Intensive Applications in IT Infrastructure with Innodisk FlexiArray Flash Appliance. Alex Ho, Product Manager Innodisk Corporation Accelerating I/O- Intensive Applications in IT Infrastructure with Innodisk FlexiArray Flash Appliance Alex Ho, Product Manager Innodisk Corporation Outline Innodisk Introduction Industry Trend & Challenge

More information

Benefits of Solid-State Storage

Benefits of Solid-State Storage This Dell technical white paper describes the different types of solid-state storage and the benefits of each. Jeff Armstrong Gary Kotzur Rahul Deshmukh Contents Introduction... 3 PCIe-SSS... 3 Differences

More information

Flash-optimized Data Progression

Flash-optimized Data Progression A Dell white paper Howard Shoobe, Storage Enterprise Technologist John Shirley, Product Management Dan Bock, Product Management Table of contents Executive summary... 3 What is different about Dell Compellent

More information

Algorithms and Methods for Distributed Storage Networks 3. Solid State Disks Christian Schindelhauer

Algorithms and Methods for Distributed Storage Networks 3. Solid State Disks Christian Schindelhauer Algorithms and Methods for Distributed Storage Networks 3. Solid State Disks Institut für Informatik Wintersemester 2007/08 Solid State Disks Motivation 2 10 5 1980 1985 1990 1995 2000 2005 2010 PRODUCTION

More information

Flash for Databases. September 22, 2015 Peter Zaitsev Percona

Flash for Databases. September 22, 2015 Peter Zaitsev Percona Flash for Databases September 22, 2015 Peter Zaitsev Percona In this Presentation Flash technology overview Review some of the available technology What does this mean for databases? Specific opportunities

More information

Impact of Flash Memory on Video-on-Demand Storage: Analysis of Tradeoffs

Impact of Flash Memory on Video-on-Demand Storage: Analysis of Tradeoffs Impact of Flash Memory on Video-on-Demand Storage: Analysis of Tradeoffs Moonkyung Ryu College of Computing Georgia Institute of Technology Atlanta, GA, USA mkryu@gatech.edu Hyojun Kim College of Computing

More information

How To Write On A Flash Memory Flash Memory (Mlc) On A Solid State Drive (Samsung)

How To Write On A Flash Memory Flash Memory (Mlc) On A Solid State Drive (Samsung) Using MLC NAND in Datacenters (a.k.a. Using Client SSD Technology in Datacenters) Tony Roug, Intel Principal Engineer SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA.

More information

A New Chapter for System Designs Using NAND Flash Memory

A New Chapter for System Designs Using NAND Flash Memory A New Chapter for System Designs Using Memory Jim Cooke Senior Technical Marketing Manager Micron Technology, Inc December 27, 2010 Trends and Complexities trends have been on the rise since was first

More information

Everything you need to know about flash storage performance

Everything you need to know about flash storage performance Everything you need to know about flash storage performance The unique characteristics of flash make performance validation testing immensely challenging and critically important; follow these best practices

More information

DIABLO TECHNOLOGIES MEMORY CHANNEL STORAGE AND VMWARE VIRTUAL SAN : VDI ACCELERATION

DIABLO TECHNOLOGIES MEMORY CHANNEL STORAGE AND VMWARE VIRTUAL SAN : VDI ACCELERATION DIABLO TECHNOLOGIES MEMORY CHANNEL STORAGE AND VMWARE VIRTUAL SAN : VDI ACCELERATION A DIABLO WHITE PAPER AUGUST 2014 Ricky Trigalo Director of Business Development Virtualization, Diablo Technologies

More information

September 25, 2007. Maya Gokhale Georgia Institute of Technology

September 25, 2007. Maya Gokhale Georgia Institute of Technology NAND Flash Storage for High Performance Computing Craig Ulmer cdulmer@sandia.gov September 25, 2007 Craig Ulmer Maya Gokhale Greg Diamos Michael Rewak SNL/CA, LLNL Georgia Institute of Technology University

More information

LSI MegaRAID CacheCade Performance Evaluation in a Web Server Environment

LSI MegaRAID CacheCade Performance Evaluation in a Web Server Environment LSI MegaRAID CacheCade Performance Evaluation in a Web Server Environment Evaluation report prepared under contract with LSI Corporation Introduction Interest in solid-state storage (SSS) is high, and

More information

Q & A From Hitachi Data Systems WebTech Presentation:

Q & A From Hitachi Data Systems WebTech Presentation: Q & A From Hitachi Data Systems WebTech Presentation: RAID Concepts 1. Is the chunk size the same for all Hitachi Data Systems storage systems, i.e., Adaptable Modular Systems, Network Storage Controller,

More information

COS 318: Operating Systems. Storage Devices. Kai Li and Andy Bavier Computer Science Department Princeton University

COS 318: Operating Systems. Storage Devices. Kai Li and Andy Bavier Computer Science Department Princeton University COS 318: Operating Systems Storage Devices Kai Li and Andy Bavier Computer Science Department Princeton University http://www.cs.princeton.edu/courses/archive/fall13/cos318/ Today s Topics! Magnetic disks!

More information

EMC XtremSF: Delivering Next Generation Performance for Oracle Database

EMC XtremSF: Delivering Next Generation Performance for Oracle Database White Paper EMC XtremSF: Delivering Next Generation Performance for Oracle Database Abstract This white paper addresses the challenges currently facing business executives to store and process the growing

More information

Architecture Enterprise Storage Performance: It s All About The Interface.

Architecture Enterprise Storage Performance: It s All About The Interface. Architecture Enterprise Storage Performance: It s All About The Interface. A DIABLO WHITE PAPER APRIL 214 diablo-technologies.com Diablo_Tech Enterprise Storage Performance: It s All About The Architecture.

More information

NAND Flash Architecture and Specification Trends

NAND Flash Architecture and Specification Trends NAND Flash Architecture and Specification Trends Michael Abraham (mabraham@micron.com) NAND Solutions Group Architect Micron Technology, Inc. August 2012 1 Topics NAND Flash Architecture Trends The Cloud

More information

Performance Report Modular RAID for PRIMERGY

Performance Report Modular RAID for PRIMERGY Performance Report Modular RAID for PRIMERGY Version 1.1 March 2008 Pages 15 Abstract This technical documentation is designed for persons, who deal with the selection of RAID technologies and RAID controllers

More information

Trabajo 4.5 - Memorias flash

Trabajo 4.5 - Memorias flash Memorias flash II-PEI 09/10 Trabajo 4.5 - Memorias flash Wojciech Ochalek This document explains the concept of flash memory and describes it s the most popular use. Moreover describes also Microdrive

More information

Database Hardware Selection Guidelines

Database Hardware Selection Guidelines Database Hardware Selection Guidelines BRUCE MOMJIAN Database servers have hardware requirements different from other infrastructure software, specifically unique demands on I/O and memory. This presentation

More information

OBJECTIVE ANALYSIS WHITE PAPER MATCH FLASH. TO THE PROCESSOR Why Multithreading Requires Parallelized Flash ATCHING

OBJECTIVE ANALYSIS WHITE PAPER MATCH FLASH. TO THE PROCESSOR Why Multithreading Requires Parallelized Flash ATCHING OBJECTIVE ANALYSIS WHITE PAPER MATCH ATCHING FLASH TO THE PROCESSOR Why Multithreading Requires Parallelized Flash T he computing community is at an important juncture: flash memory is now generally accepted

More information

Understanding Flash SSD Performance

Understanding Flash SSD Performance Understanding Flash SSD Performance Douglas Dumitru CTO EasyCo LLC August 16, 2007 DRAFT Flash based Solid State Drives are quickly becoming popular in a wide variety of applications. Most people think

More information

Important Differences Between Consumer and Enterprise Flash Architectures

Important Differences Between Consumer and Enterprise Flash Architectures Important Differences Between Consumer and Enterprise Flash Architectures Robert Sykes Director of Firmware Flash Memory Summit 2013 Santa Clara, CA OCZ Technology Introduction This presentation will describe

More information

Accelerating Enterprise Applications and Reducing TCO with SanDisk ZetaScale Software

Accelerating Enterprise Applications and Reducing TCO with SanDisk ZetaScale Software WHITEPAPER Accelerating Enterprise Applications and Reducing TCO with SanDisk ZetaScale Software SanDisk ZetaScale software unlocks the full benefits of flash for In-Memory Compute and NoSQL applications

More information

Amadeus SAS Specialists Prove Fusion iomemory a Superior Analysis Accelerator

Amadeus SAS Specialists Prove Fusion iomemory a Superior Analysis Accelerator WHITE PAPER Amadeus SAS Specialists Prove Fusion iomemory a Superior Analysis Accelerator 951 SanDisk Drive, Milpitas, CA 95035 www.sandisk.com SAS 9 Preferred Implementation Partner tests a single Fusion

More information

1 Storage Devices Summary

1 Storage Devices Summary Chapter 1 Storage Devices Summary Dependability is vital Suitable measures Latency how long to the first bit arrives Bandwidth/throughput how fast does stuff come through after the latency period Obvious

More information

Efficient Flash Memory Read Request Handling Based on Split Transactions

Efficient Flash Memory Read Request Handling Based on Split Transactions Efficient Memory Handling Based on Split Transactions Bryan Kim, Eyee Hyun Nam, Yoon Jae Seong, Hang Jun Min, and Sang Lyul Min School of Computer Science and Engineering, Seoul National University, Seoul,

More information

Maximizing Your Server Memory and Storage Investments with Windows Server 2012 R2

Maximizing Your Server Memory and Storage Investments with Windows Server 2012 R2 Executive Summary Maximizing Your Server Memory and Storage Investments with Windows Server 2012 R2 October 21, 2014 What s inside Windows Server 2012 fully leverages today s computing, network, and storage

More information

EMC XtremSF: Delivering Next Generation Storage Performance for SQL Server

EMC XtremSF: Delivering Next Generation Storage Performance for SQL Server White Paper EMC XtremSF: Delivering Next Generation Storage Performance for SQL Server Abstract This white paper addresses the challenges currently facing business executives to store and process the growing

More information

Input/output (I/O) I/O devices. Performance aspects. CS/COE1541: Intro. to Computer Architecture. Input/output subsystem.

Input/output (I/O) I/O devices. Performance aspects. CS/COE1541: Intro. to Computer Architecture. Input/output subsystem. Input/output (I/O) CS/COE1541: Intro. to Computer Architecture Input/output subsystem Sangyeun Cho Computer Science Department I/O connects User (human) and CPU (or a program running on it) Environment

More information

SLC vs MLC: Proper Flash Selection for SSDs in Industrial, Military and Avionic Applications. A TCS Space & Component Technology White Paper

SLC vs MLC: Proper Flash Selection for SSDs in Industrial, Military and Avionic Applications. A TCS Space & Component Technology White Paper SLC vs MLC: Proper Flash Selection for SSDs in Industrial, Military and Avionic Applications A TCS Space & Component Technology White Paper Introduction As with most storage technologies, NAND Flash vendors

More information

SOS: Software-Based Out-of-Order Scheduling for High-Performance NAND Flash-Based SSDs

SOS: Software-Based Out-of-Order Scheduling for High-Performance NAND Flash-Based SSDs SOS: Software-Based Out-of-Order Scheduling for High-Performance NAND -Based SSDs Sangwook Shane Hahn, Sungjin Lee, and Jihong Kim Department of Computer Science and Engineering, Seoul National University,

More information

89 Fifth Avenue, 7th Floor. New York, NY 10003. www.theedison.com 212.367.7400. White Paper. HP 3PAR Adaptive Flash Cache: A Competitive Comparison

89 Fifth Avenue, 7th Floor. New York, NY 10003. www.theedison.com 212.367.7400. White Paper. HP 3PAR Adaptive Flash Cache: A Competitive Comparison 89 Fifth Avenue, 7th Floor New York, NY 10003 www.theedison.com 212.367.7400 White Paper HP 3PAR Adaptive Flash Cache: A Competitive Comparison Printed in the United States of America Copyright 2014 Edison

More information

Flash Memory Arrays Enabling the Virtualized Data Center. July 2010

Flash Memory Arrays Enabling the Virtualized Data Center. July 2010 Flash Memory Arrays Enabling the Virtualized Data Center July 2010 2 Flash Memory Arrays Enabling the Virtualized Data Center This White Paper describes a new product category, the flash Memory Array,

More information

DELL SOLID STATE DISK (SSD) DRIVES

DELL SOLID STATE DISK (SSD) DRIVES DELL SOLID STATE DISK (SSD) DRIVES STORAGE SOLUTIONS FOR SELECT POWEREDGE SERVERS By Bryan Martin, Dell Product Marketing Manager for HDD & SSD delltechcenter.com TAB LE OF CONTENTS INTRODUCTION 3 DOWNFALLS

More information

The Data Placement Challenge

The Data Placement Challenge The Data Placement Challenge Entire Dataset Applications Active Data Lowest $/IOP Highest throughput Lowest latency 10-20% Right Place Right Cost Right Time 100% 2 2 What s Driving the AST Discussion?

More information

FAST 11. Yongseok Oh <ysoh@uos.ac.kr> University of Seoul. Mobile Embedded System Laboratory

FAST 11. Yongseok Oh <ysoh@uos.ac.kr> University of Seoul. Mobile Embedded System Laboratory CAFTL: A Content-Aware Flash Translation Layer Enhancing the Lifespan of flash Memory based Solid State Drives FAST 11 Yongseok Oh University of Seoul Mobile Embedded System Laboratory

More information

Performance Characteristics of VMFS and RDM VMware ESX Server 3.0.1

Performance Characteristics of VMFS and RDM VMware ESX Server 3.0.1 Performance Study Performance Characteristics of and RDM VMware ESX Server 3.0.1 VMware ESX Server offers three choices for managing disk access in a virtual machine VMware Virtual Machine File System

More information

Data Distribution Algorithms for Reliable. Reliable Parallel Storage on Flash Memories

Data Distribution Algorithms for Reliable. Reliable Parallel Storage on Flash Memories Data Distribution Algorithms for Reliable Parallel Storage on Flash Memories Zuse Institute Berlin November 2008, MEMICS Workshop Motivation Nonvolatile storage Flash memory - Invented by Dr. Fujio Masuoka

More information

RFLRU: A Buffer Cache Management Algorithm for Solid State Drive to Improve the Write Performance on Mixed Workload

RFLRU: A Buffer Cache Management Algorithm for Solid State Drive to Improve the Write Performance on Mixed Workload Engineering Letters, :, EL RFLRU: A Buffer Cache Management Algorithm for Solid State Drive to Improve the Write Performance on Mixed Workload Arul Selvan Ramasamy, and Porkumaran Karantharaj Abstract

More information

Introduction to I/O and Disk Management

Introduction to I/O and Disk Management Introduction to I/O and Disk Management 1 Secondary Storage Management Disks just like memory, only different Why have disks? Memory is small. Disks are large. Short term storage for memory contents (e.g.,

More information

NAND Flash Memories. Understanding NAND Flash Factory Pre-Programming. Schemes

NAND Flash Memories. Understanding NAND Flash Factory Pre-Programming. Schemes NAND Flash Memories Understanding NAND Flash Factory Pre-Programming Schemes Application Note February 2009 an_elnec_nand_schemes, version 1.00 Version 1.00/02.2009 Page 1 of 20 NAND flash technology enables

More information

The Shortcut Guide to Balancing Storage Costs and Performance with Hybrid Storage

The Shortcut Guide to Balancing Storage Costs and Performance with Hybrid Storage The Shortcut Guide to Balancing Storage Costs and Performance with Hybrid Storage sponsored by Dan Sullivan Chapter 1: Advantages of Hybrid Storage... 1 Overview of Flash Deployment in Hybrid Storage Systems...

More information

Computer Systems Structure Main Memory Organization

Computer Systems Structure Main Memory Organization Computer Systems Structure Main Memory Organization Peripherals Computer Central Processing Unit Main Memory Computer Systems Interconnection Communication lines Input Output Ward 1 Ward 2 Storage/Memory

More information

Benchmarking Cassandra on Violin

Benchmarking Cassandra on Violin Technical White Paper Report Technical Report Benchmarking Cassandra on Violin Accelerating Cassandra Performance and Reducing Read Latency With Violin Memory Flash-based Storage Arrays Version 1.0 Abstract

More information

Spatial Data Management over Flash Memory

Spatial Data Management over Flash Memory Spatial Data Management over Flash Memory Ioannis Koltsidas 1 and Stratis D. Viglas 2 1 IBM Research, Zurich, Switzerland iko@zurich.ibm.com 2 School of Informatics, University of Edinburgh, UK sviglas@inf.ed.ac.uk

More information

Solid State Drive Technology

Solid State Drive Technology Technical white paper Solid State Drive Technology Differences between SLC, MLC and TLC NAND Table of contents Executive summary... 2 SLC vs MLC vs TLC... 2 NAND cell technology... 2 Write amplification...

More information

Flash Memory Technology in Enterprise Storage

Flash Memory Technology in Enterprise Storage NETAPP WHITE PAPER Flash Memory Technology in Enterprise Storage Flexible Choices to Optimize Performance Mark Woods and Amit Shah, NetApp November 2008 WP-7061-1008 EXECUTIVE SUMMARY Solid state drives

More information

Comparison of NAND Flash Technologies Used in Solid- State Storage

Comparison of NAND Flash Technologies Used in Solid- State Storage An explanation and comparison of SLC and MLC NAND technologies August 2010 Comparison of NAND Flash Technologies Used in Solid- State Storage By Shaluka Perera IBM Systems and Technology Group Bill Bornstein

More information

SLC vs MLC: Which is best for high-reliability apps?

SLC vs MLC: Which is best for high-reliability apps? SLC vs MLC: Which is best for high-reliability apps? Here's an examination of trade-offs, with an emphasis on how they affect the reliability of storage targeted at industrial, military and avionic applications.

More information

Flash 101. Violin Memory Switzerland. Violin Memory Inc. Proprietary 1

Flash 101. Violin Memory Switzerland. Violin Memory Inc. Proprietary 1 Flash 101 Violin Memory Switzerland Violin Memory Inc. Proprietary 1 Agenda - What is Flash? - What is the difference between Flash types? - Why are SSD solutions different from Flash Storage Arrays? -

More information

DEPLOYING HYBRID STORAGE POOLS With Sun Flash Technology and the Solaris ZFS File System. Roger Bitar, Sun Microsystems. Sun BluePrints Online

DEPLOYING HYBRID STORAGE POOLS With Sun Flash Technology and the Solaris ZFS File System. Roger Bitar, Sun Microsystems. Sun BluePrints Online DEPLOYING HYBRID STORAGE POOLS With Sun Flash Technology and the Solaris ZFS File System Roger Bitar, Sun Microsystems Sun BluePrints Online Part No 820-5881-10 Revision 1.0, 10/31/08 Sun Microsystems,

More information

Speeding Up Cloud/Server Applications Using Flash Memory

Speeding Up Cloud/Server Applications Using Flash Memory Speeding Up Cloud/Server Applications Using Flash Memory Sudipta Sengupta Microsoft Research, Redmond, WA, USA Contains work that is joint with B. Debnath (Univ. of Minnesota) and J. Li (Microsoft Research,

More information

Logical Operations. Control Unit. Contents. Arithmetic Operations. Objectives. The Central Processing Unit: Arithmetic / Logic Unit.

Logical Operations. Control Unit. Contents. Arithmetic Operations. Objectives. The Central Processing Unit: Arithmetic / Logic Unit. Objectives The Central Processing Unit: What Goes on Inside the Computer Chapter 4 Identify the components of the central processing unit and how they work together and interact with memory Describe how

More information

Memoright SSDs: The End of Hard Drives?

Memoright SSDs: The End of Hard Drives? Memoright SSDs: The End of Hard Drives? http://www.tomshardware.com/reviews/ssd-memoright,1926.html 9:30 AM - May 9, 2008 by Patrick Schmid and Achim Roos Source: Tom's Hardware Table of content 1 - The

More information

Boosting Database Batch workloads using Flash Memory SSDs

Boosting Database Batch workloads using Flash Memory SSDs Boosting Database Batch workloads using Flash Memory SSDs Won-Gill Oh and Sang-Won Lee School of Information and Communication Engineering SungKyunKwan University, 27334 2066, Seobu-Ro, Jangan-Gu, Suwon-Si,

More information

Cloud Storage. Parallels. Performance Benchmark Results. White Paper. www.parallels.com

Cloud Storage. Parallels. Performance Benchmark Results. White Paper. www.parallels.com Parallels Cloud Storage White Paper Performance Benchmark Results www.parallels.com Table of Contents Executive Summary... 3 Architecture Overview... 3 Key Features... 4 No Special Hardware Requirements...

More information

Flash Performance in Storage Systems. Bill Moore Chief Engineer, Storage Systems Sun Microsystems

Flash Performance in Storage Systems. Bill Moore Chief Engineer, Storage Systems Sun Microsystems Flash Performance in Storage Systems Bill Moore Chief Engineer, Storage Systems Sun Microsystems 1 Disk to CPU Discontinuity Moore s Law is out-stripping disk drive performance (rotational speed) As a

More information

SSD Performance Tips: Avoid The Write Cliff

SSD Performance Tips: Avoid The Write Cliff ebook 100% KBs/sec 12% GBs Written SSD Performance Tips: Avoid The Write Cliff An Inexpensive and Highly Effective Method to Keep SSD Performance at 100% Through Content Locality Caching Share this ebook

More information

Optimizing SQL Server Storage Performance with the PowerEdge R720

Optimizing SQL Server Storage Performance with the PowerEdge R720 Optimizing SQL Server Storage Performance with the PowerEdge R720 Choosing the best storage solution for optimal database performance Luis Acosta Solutions Performance Analysis Group Joe Noyola Advanced

More information

Emerging storage and HPC technologies to accelerate big data analytics Jerome Gaysse JG Consulting

Emerging storage and HPC technologies to accelerate big data analytics Jerome Gaysse JG Consulting Emerging storage and HPC technologies to accelerate big data analytics Jerome Gaysse JG Consulting Introduction Big Data Analytics needs: Low latency data access Fast computing Power efficiency Latest

More information

hybridfs: Integrating NAND Flash-Based SSD and HDD for Hybrid File System

hybridfs: Integrating NAND Flash-Based SSD and HDD for Hybrid File System hybridfs: Integrating NAND Flash-Based SSD and HDD for Hybrid File System Jinsun Suk and Jaechun No College of Electronics and Information Engineering Sejong University 98 Gunja-dong, Gwangjin-gu, Seoul

More information

A Close Look at PCI Express SSDs. Shirish Jamthe Director of System Engineering Virident Systems, Inc. August 2011

A Close Look at PCI Express SSDs. Shirish Jamthe Director of System Engineering Virident Systems, Inc. August 2011 A Close Look at PCI Express SSDs Shirish Jamthe Director of System Engineering Virident Systems, Inc. August 2011 Macro Datacenter Trends Key driver: Information Processing Data Footprint (PB) CAGR: 100%

More information

CHAPTER 2: HARDWARE BASICS: INSIDE THE BOX

CHAPTER 2: HARDWARE BASICS: INSIDE THE BOX CHAPTER 2: HARDWARE BASICS: INSIDE THE BOX Multiple Choice: 1. Processing information involves: A. accepting information from the outside world. B. communication with another computer. C. performing arithmetic

More information

Comparison of Hybrid Flash Storage System Performance

Comparison of Hybrid Flash Storage System Performance Test Validation Comparison of Hybrid Flash Storage System Performance Author: Russ Fellows March 23, 2015 Enabling you to make the best technology decisions 2015 Evaluator Group, Inc. All rights reserved.

More information