OBJECTIVE ANALYSIS WHITE PAPER
MATCHING FLASH TO THE PROCESSOR
Why Multithreading Requires Parallelized Flash
The computing community is at an important juncture: flash memory is now generally accepted as a way to significantly increase performance, but in order to take advantage of all of the throughput that flash offers, system developers must abandon certain classic concepts of storage.

Flash memory is growing in popularity as a new layer in the memory-storage hierarchy. Flash is faster than HDD but slower than DRAM; it is cheaper than DRAM but more expensive than HDD. This is the magic combination that gives flash its appeal. One significant disadvantage, though, is that the distinction between memory and storage is very clear in conventional computing structures, with storage on one side of the storage controller and memory on the other. To date, flash has been relegated to the storage side of the storage controller, even though it is, in fact, memory. The problem with getting from here to there is that we are looking at computer architecture from an outdated vantage point, and this makes it difficult to see the clearest way to guarantee peak performance.

How Did We Get Here?

In the past, microprocessors vied to outdo each other by increasing clock rates; each generation was faster than the last. This worked because a semiconductor phenomenon called Dennard Scaling made transistor speed scale with size, allowing each new process shrink to naturally result in faster speeds.

It was relatively easy to balance computer performance in those days. The system designer would choose a processor clock frequency and would match that processor's speed with a DRAM interface that had been designed to fit. Figure 1 is a loose graphic depiction of this approach: to increase performance, the clock speeds of both the processor and the memory were raised. Since the boxes in the diagram remain the same height, making both boxes taller by increasing their clock speeds ensures that the system's performance remains balanced.

[Figure 1: The classic approach to performance increases: raise processor and memory clock frequencies.]

Storage usually received little attention because hard drives (HDDs) were significantly slower than the rest of the system and the storage controller always provided significantly more bandwidth than even multiple HDDs could use.
In the 1990s it became clear that this approach would become unfeasible within the next few years. A combination of leakage currents and resistor-capacitor (RC) time constants limited the maximum speed at which a processor could run to roughly 2GHz, after which heat dissipation would become a significant issue. Indeed, processor clock speeds have reached a limit (see Figure 2). Still, transistors continued to shrink, so more of them could be economically added to the processor chip. The designers' dilemma was to find a way to improve processor performance with a growing number of transistors even if clock speeds had reached a stopping point.

[Figure 2: Processor clock speeds reach a limit. Source: Stanford CPU Database, CPUdb.Stanford.edu]

The result was to increase the number of cores in the processor chip: rather than use a single processor running at its maximum speed, designers harnessed the power of multiple processors running in parallel. This created a new challenge, since most software at that point had been written under the assumption that it would run on a single CPU. Intel and AMD, the two processor leaders, undertook the task of creating a computing environment around multiple processing. The two companies worked with compiler companies and other software toolmakers to ensure that these tools could create code that allows multiple concurrent tasks to run on separate CPU cores.

[Figure 3: Revised approach: parallel CPUs and multiple memory channels. Source: Objective Analysis, 2014]

Intel and AMD also trained the programmer community to think in terms of multithreading: how could they maximize the number of tasks that could be run concurrently? The idea was for software to support as many separate concurrent tasks as possible so that it would run at a speed dictated by the number of available cores. Ideally, a program would run twice as fast on a dual-core processor as it would on a single-core processor, three times as fast on three cores, and so on, until the number of cores exceeded the number of tasks to be performed. Naturally this motivated programmers to explore ways to divide their programs into the largest number of threads that made sense, so that their product would always perform better if more cores became available.
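To make the divide-into-many-tasks idea concrete, the sketch below splits one job into far more independent chunks than any likely core count, so the same program automatically runs faster on machines with more cores, up to the point where cores outnumber chunks. This is a minimal, hypothetical illustration in Python; the function names, workload, and chunk count are assumptions for the example, not anything taken from the white paper.

```python
# Minimal sketch: divide one job into many independent tasks so the
# operating system can spread them across however many cores exist.
import os
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """CPU-bound work on one independent slice of the problem."""
    start, stop = bounds
    return sum(i * i for i in range(start, stop))

def parallel_sum(n, chunks=64):
    # More chunks than cores: every core stays busy, and the same code
    # speeds up automatically on machines with more cores.
    step = n // chunks
    slices = [(i * step, (i + 1) * step if i < chunks - 1 else n)
              for i in range(chunks)]
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        return sum(pool.map(partial_sum, slices))

if __name__ == "__main__":
    print(parallel_sum(10_000_000))
```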
At the same time, server designers defined motherboards that amplified this concept, teaming two or four processor chips into dual-socket or quad-socket designs.

Adding DRAM Channels for Bandwidth

Memory had to play a role in feeding this architecture's voracious hunger for data and instructions. Not only did DRAM interfaces migrate to SDRAM, then DDR, DDR2, and DDR3, but multiple memory channels were added to the board, supporting as many as four separate memory buses. This is depicted as multiple memory boxes in Figure 3. A four-socket (four processor chip) motherboard with four memory channels per socket would then have sixteen memory channels, each pumping out 64 bits (8 bytes) of data at double the 1,066MHz clock rate, amounting to a total memory bandwidth of 270GB/s.
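The 270GB/s figure follows directly from multiplying the channel count by the bytes moved per transfer and the transfer rate. The short sketch below simply reproduces that arithmetic using the numbers quoted above; it is an illustration of the calculation, not a sizing tool.

```python
# Reproduce the aggregate memory bandwidth quoted in the text.
# Assumptions match the example: 4 sockets x 4 channels, a 64-bit
# (8-byte) bus per channel, data moved on both edges of a 1,066MHz clock.
sockets = 4
channels_per_socket = 4
bytes_per_transfer = 8               # 64-bit channel
transfers_per_second = 2 * 1.066e9   # double data rate at 1,066MHz

channels = sockets * channels_per_socket            # 16 channels
bandwidth = channels * bytes_per_transfer * transfers_per_second

print(f"{bandwidth / 1e9:.0f} GB/s")  # ~273 GB/s, i.e. roughly 270GB/s
```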
The move to four memory channels not only quadrupled memory/processor bandwidth, but it also allowed processors to support memory arrays that were four times as large. Even with these larger memory arrays, today's ballooning datasets have expanded beyond the server's maximum memory sizes, forcing the working data to be swapped into and out of HDD, and shifting the focus to the storage subsystem.

A Disconnect with Storage

Disk I/O speeds didn't undergo the same speed ramp. There was really no reason for them to: because of their mechanical nature, HDD bandwidth is relatively limited and cannot be improved. Although spindle speeds did double with the advent of the enterprise HDD, this change significantly increased power consumption, and designers determined that even faster spindle speeds would be impractical simply due to issues of power dissipation. Since HDDs peaked at around 0.4MB/s, parallel interfaces to the CPU would have made little difference. If the disk speed wouldn't improve, there was little reason to improve the speed of the storage controller.

Storage systems were accelerated not by improving the bandwidth between the disk system and the memory-processor complex, but by increasing the data transfer rate between the disk and the storage controller, which already had significantly more bandwidth than was available from HDDs. Common approaches were to use RAID or other striping mechanisms to parallelize the disk array. This is represented by the multiple very small boxes in Figure 4.

[Figure 4: Boosting disk speed: the storage controller becomes a new bottleneck.]

When SSDs came along the game changed significantly. Think of it as adding taller boxes to the disk column of Figure 4. These taller boxes, representing the higher performance of the SSD, were suddenly hampered by the storage controller, which could not service all of the bandwidth the SSD provided. RAID cards had to be redesigned to handle all the bandwidth that was suddenly available in multiple-SSD systems. SSD makers partially solved this problem by routing the SSD through the PCIe bus, which was originally conceived as a way to add coprocessors onto a system. This opened up bandwidth significantly, but still resulted in a bottleneck at the interface between the disk system and the processor-memory complex. Even with multiple PCIe lanes the system still suffers from significantly lower bandwidth than does the memory bus, mainly because there is only one PCIe bus per processor chip. This has prevented flash from providing its best performance to the processor.

Achieving Parallel Access with Flash DIMMs

Notice that during this entire discussion nothing has been said about changing the system topology to optimize it for flash. Even when the flash has been moved to the PCIe bus, it is still on the storage side of the graphic, where the storage controller (and storage control software) slows it down. Users who break with convention and think of flash as a new kind of memory can get past this snag.

What if flash could be added to the memory bus? Rather than adding a knot of high-performance storage on the slow side of the storage controller, several smaller blades of fast flash could be added to the system, each on a different memory bus. This would allow parallel access from the processor to the flash at speeds several times that of the storage controller. This is the way that DRAM bandwidth is made to match the needs of multiple processors, and this is exactly the way that flash should be used by anyone who wants to achieve the highest performance this technology can deliver.

Figure 5 illustrates this approach. The small boxes in this diagram represent flash memory that has been moved (arrows) from the storage side of the system to the RAM side of the system. These boxes could all be RAIDed on the HDD side of the storage controller, or they could communicate with the processor via multiple PCIe channels, but the memory-processor interface between the flash and the CPU chip is still significantly faster than either of these options.

[Figure 5: Getting past the slow interface: add flash to the memory channels.]

Breaking Up DIMMs for Bandwidth

Another interesting point is that the designer doesn't need to add all that much flash to each memory bus. As a rule of thumb, tiers in the memory hierarchy should be ten times the size of the next-faster tier. In a large server board the maximum memory size is four channels of 32GB each, or 128GB. The rule of thumb would indicate that each channel should have about 320GB of NAND flash.
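Applying the ten-times rule of thumb to the 128GB example above works out as follows. The sketch is purely illustrative of that arithmetic, using only the figures quoted in the text.

```python
# Rule-of-thumb flash sizing for the configuration described above:
# each memory-hierarchy tier should be about 10x the next-faster tier.
dram_channels = 4
dram_per_channel_gb = 32

dram_total_gb = dram_channels * dram_per_channel_gb       # 128 GB of DRAM
flash_total_gb = 10 * dram_total_gb                        # 1,280 GB of flash
flash_per_channel_gb = flash_total_gb / dram_channels      # 320 GB per channel

print(dram_total_gb, flash_total_gb, flash_per_channel_gb)  # 128 1280 320.0
```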
More realistically, the designer might reduce the system's DRAM complement when adding flash, partly because a DIMM socket is being given up. Let's say the system actually used 16GB of DRAM and around 160GB of NAND flash. Such a trade-off is well warranted: Objective Analysis has performed benchmarks showing that systems with a small DRAM and flash solidly outperform systems with a large DRAM and no flash, when the money spent on flash plus DRAM is the same for both systems.

Spreading smaller amounts of flash among the system's various memory buses should provide the highest throughput. Rather than use a single large 1TB SSD on a PCIe, SAS, or SATA interface, and rather than add a RAID of SSDs that communicate at high speed through a PCIe-based RAID card, the optimum system would pepper many smaller 200GB memory-channel storage (MCS) based flash DIMMs throughout the system, right on the memory bus, to allow the processor's multiple memory channels to gain access to flash memory's ultimate performance benefit. The amount of flash used would remain the same, but the speed at which it can be accessed would increase by more than an order of magnitude. In most cases the flash is added to the system in order to improve throughput at a reasonable cost, so the focus isn't on installing the largest amount of flash, but on achieving the speediest access to an affordable amount of flash.

All-flash DIMMs are available today in the form of SanDisk's ULLtraDIMM and the IBM eXFlash DIMM memory-channel storage. These devices are both available in capacities of 200GB and 400GB and are block accessible, so that they can be treated as storage by existing software, yet accessed at memory speeds through the memory bus.

Summary

Flash memory has brought exciting improvements to computing performance, but it has not been able to perform to its maximum potential because current computer architecture calls for all storage to be placed behind a storage controller. Meanwhile, other parts of the system have been broken into a number of parallel paths in order to coax increasing performance out of the system without raising clock rates. This is how multicore processors evolved, with each processor supporting multiple memory channels. Storage alone has failed to keep pace with this change.

Now that flash is accepted as a storage layer, designers need to further explore the way that it is put to work. Placing flash behind the storage controller slows it down. Modern flash-based DIMMs allow NAND flash to be added to the highest-speed channel in the system: the memory bus. With the adoption of a bus-based parallel flash system we can expect to see performance increase significantly over the boost already being realized in systems that use SSD-based flash storage.

Jim Handy, March 2014
