Accelerating Big Data: Using SanDisk SSDs for Apache HBase Workloads




WHITE PAPER

Accelerating Big Data: Using SanDisk SSDs for Apache HBase Workloads

December 2014

Table of Contents

Executive Summary
Apache HBase
YCSB Benchmark
Test Design
Test Environment
Technical Component Specifications
Compute Infrastructure
Network Infrastructure
Storage Infrastructure
HBase Configuration
Test Validation and Results Summary
Test Methodology
Update-Heavy Workload (Workload A)
Read-Intensive Workload (Workload B)
Read-Only Workload (Workload C)
Conclusion
Summary
References

Executive Summary

In today's hyper-connected world, a significant amount of data is being collected, and later analyzed, to make business decisions. This explosion of data has led to a variety of analytics technologies that can operate on Big Data, which stretches into the petabyte (PB) range within organizations and into the zettabyte (ZB) range worldwide. Traditional database systems and data warehouses are being augmented with newer scale-out technologies, such as Apache Hadoop, and with NoSQL databases, such as Apache HBase, Cassandra and MongoDB, to manage the massive scale of data being processed and analyzed by the organizations that use these databases.

This technical paper describes benchmark testing conducted by SanDisk, using SanDisk SSDs in conjunction with an Apache HBase deployment. The testing used the Yahoo! Cloud Serving Benchmark (YCSB) with two dataset sizes, one smaller and one larger than the available server memory, running against a single-node Apache HBase deployment, with a focus on update-heavy and read-heavy YCSB workloads. The primary goal of this paper is to show the benefits of using SanDisk solid-state drives (SSDs) within an HBase environment. As the testing demonstrated, SanDisk SSDs provided significant performance improvements over mechanically driven hard disk drives (HDDs) when running I/O-intensive workloads, especially mixed workloads with random read and write data accesses. For more information, see www.sandisk.com.

SanDisk SSDs help boost performance in HBase environments by providing 30x to 40x more transactions per second (TPS) than HDD configurations, along with very low latencies of less than 50 milliseconds (ms), compared to more than 1,000ms with HDDs. These performance improvements translate into significantly more efficient data-analytics operations, with faster time to results, and therefore into cost savings from reduced operational expenses.

Apache HBase

Apache HBase is an open-source, distributed, scalable, big-data database. It is a non-relational (NoSQL) database, unlike traditional relational database management systems (RDBMS). In distributed mode, Apache HBase uses the Hadoop Distributed File System (HDFS); it can also be deployed in standalone mode, in which case it uses the local file system. HBase is used for random, real-time read/write access to large quantities of sparse data. It uses Log-Structured Merge trees (LSM trees) to store and query the data, and it features compression, in-memory caching, and very fast scans. Importantly, HBase tables can serve as both the input and output for MapReduce jobs.

Some major features of HBase that have proven effective in supporting analytics workloads are:

- Linear and modular scalability
- Strictly consistent reads and writes
- Automatic and configurable sharding of tables, with shards being smaller and more manageable parts of large database tables

- Fault-tolerant, highly available processing, through automatic failover between RegionServers
- Convenient base classes for backing Hadoop MapReduce jobs with Apache HBase tables
- A Java API for client access, as well as Thrift and REST gateway APIs, giving applications a variety of ways to access the data
- Near-real-time query support
- Support for exporting metrics via the Hadoop metrics subsystem to files, or to the Ganglia open-source cluster management and administration tool

The HBase architecture defines two storage layers, the MemStore and the StoreFile. An object is written into the MemStore first, and when the MemStore is filled to capacity, a flush-to-disk operation is requested, which writes the MemStore contents to disk. HBase performs compaction/compression during these flush-to-disk operations (which are more frequent for write-heavy workloads), merging different StoreFiles together. HBase read requests first try to read an object from the MemStore; on a read miss, the read is fulfilled from the StoreFile. With a read-heavy workload, the number of concurrent read operations from disk increases accordingly.

The HBase storage architecture for a distributed HBase instance is shown in Figure 1. The figure shows the various components within HBase, including the HBase Master and the HBase RegionServers, and how these components access the underlying files residing in HDFS. Please see the References section for detailed information about the HBase architecture.

Figure 1: HBase Components and HDFS (diagram of the Client, ZooKeeper, HMaster and HRegionServers, with each HRegion's HLog, Stores, MemStores and StoreFiles/HFiles residing on HDFS DataNodes, accessed through the DFS Client)
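For illustration, the following minimal sketch shows a single random write and read against an HBase table using the standard HBase Java client API. It assumes the Connection/Table API of HBase 1.x and later, which differs slightly from older client versions; the table name "usertable" (YCSB's default table) and the column family "cf" are simply illustrative names, not part of this testing. Each Put is recorded in the write-ahead log and the MemStore, while each Get is served from the MemStore or block cache when possible and otherwise from the StoreFiles on disk.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseReadWriteSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();   // reads hbase-site.xml from the classpath
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Table table = conn.getTable(TableName.valueOf("usertable"))) {

                // Write: recorded in the write-ahead log and the MemStore, flushed to a StoreFile later
                Put put = new Put(Bytes.toBytes("user1000"));
                put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("field0"), Bytes.toBytes("value0"));
                table.put(put);

                // Read: served from the MemStore or block cache if present, otherwise from StoreFiles on disk
                Get get = new Get(Bytes.toBytes("user1000"));
                Result result = table.get(get);
                System.out.println(Bytes.toString(result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("field0"))));
            }
        }
    }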

YCSB Benchmark

The Yahoo! Cloud Serving Benchmark (YCSB) consists of two components:

- The client, which generates the load according to a workload type and records the latency and throughput associated with that workload.
- The workload files, which define the workload type by describing the size of the data set, the total number of requests, and the ratio of read and write queries.

There are six major workload types in YCSB:

1. Update-Heavy Workload A: 50/50 update/read ratio
2. Read-Heavy Workload B: 5/95 update/read ratio
3. Read-Only Workload C: 100% read-only; maps closely to Workload B
4. Read-Latest Workload D: 5/95 insert/read ratio, with the read load skewed towards the end of the key range; has similarities to Workload B
5. Short-Ranges Workload E: 5/95 insert/scan ratio, over ranges of up to 100 records
6. Read-Modify-Write Workload F: 50/50 write/read ratio; similar to Workload A, but the writes are actual updates/modifications rather than the blind writes of Workload A

For additional information about YCSB and its workload types, please visit the official YCSB page listed in the References section of this paper. This technical paper covers the first three workload types (A, B and C) and reports the latency and transactions-per-second results for these workloads on different storage configurations. Together they provide a good mix of update-heavy and read-heavy test scenarios. A short sketch of the core parameters that make up a YCSB workload file appears at the end of this section, after the test environment description.

Test Design

A standalone HBase instance using the local file system was set up to determine the benefits of using solid-state drives within an HBase environment. The testing was conducted with the YCSB benchmark, using different workloads (Workloads A, B and C), two workload dataset sizes (one smaller and one larger than the available memory), and different storage configurations for HBase. The transactions-per-second (TPS) and latency results for each of these tests were collected and analyzed, and are summarized in the Test Validation and Results Summary section of this paper.

Test Environment

The test environment consisted of one Dell PowerEdge R720 server with two Intel Xeon CPUs (12 cores per CPU) and 96GB of DRAM, which served as the HBase server, and one Dell PowerEdge R720, which served as the client to the HBase server, running the YCSB workloads. For testing purposes, a 1GbE network interconnect was used between the server and the client. The local storage configuration on the HBase server varied between an all-HDD and an all-SSD configuration (more details are provided in the Test Methodology section of this paper). Two dataset sizes were used by the YCSB workloads on the client.
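For reference, the handful of parameters that define a YCSB workload can be sketched as follows. The snippet simply builds the parameter set as a Java Properties object for illustration; in an actual run these settings live in a plain-text workload file (for example, workloads/workloada in the YCSB distribution), and the record and operation counts shown here are illustrative values rather than the exact ones used in this testing.

    import java.util.Properties;

    public class YcsbWorkloadSketch {
        public static void main(String[] args) {
            // Core YCSB (CoreWorkload) parameters; the values below are illustrative only.
            Properties workloadA = new Properties();
            workloadA.setProperty("recordcount", "10000000");        // records inserted during the load phase
            workloadA.setProperty("operationcount", "10000000");     // total operations issued during the run phase
            workloadA.setProperty("readproportion", "0.5");          // Workload A: 50% reads
            workloadA.setProperty("updateproportion", "0.5");        // Workload A: 50% updates
            workloadA.setProperty("requestdistribution", "zipfian"); // "uniform" or "zipfian" key-access pattern
            workloadA.list(System.out);
        }
    }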

Technical Component Specifications

Table 1: Hardware components

- HBase server (quantity 1): Dell Inc. PowerEdge R720, two Intel Xeon E5-2620 0 CPUs @ 2.00GHz, 96GB memory; CentOS 5.10 64-bit, Apache HBase 2.4.9
- YCSB client (quantity 1): Dell Inc. PowerEdge R720, two Intel Xeon E5-2620 0 CPUs @ 2.00GHz, 96GB memory; CentOS 5.10 64-bit, YCSB 0.1.4
- Data network (quantity 1): Dell PowerConnect 2824 24-port switch, 1GbE network switch
- HBase local file system, all-HDD (quantity 6): 500GB 7.2K RPM Dell SATA HDDs, configured as a single RAID0 volume
- HBase local file system, all-SSD (quantity 6): 480GB CloudSpeed Ascend SATA SSDs, configured as a single RAID0 volume

Table 2: Software components

- CentOS Linux 5.10: operating system for server and client
- Apache HBase 2.4.9: HBase database server
- YCSB 0.1.4: client workload benchmark

Compute Infrastructure

The Apache HBase server is a Dell PowerEdge R720 with two Intel Xeon E5-2620 CPUs @ 2.00GHz and 96GB of memory. The client's compute infrastructure is the same as the server's.

Network Infrastructure

The client and the HBase server are connected to a 1GbE network via their onboard 1GbE NICs, using a Dell PowerConnect 2824 24-port switch. This network was the primary means of communication between the client and the HBase server.

Storage Infrastructure

The Apache HBase server uses six 500GB 7.2K RPM Dell SATA HDDs in a RAID0 configuration for the all-HDD configuration. The HDDs are replaced by six 480GB CloudSpeed Ascend SATA SSDs for the all-SSD test configuration. The RAID0 volume is formatted with the XFS file system and mounted for use by HBase.

HBase Configuration

Most of the default HBase configuration parameters were used across all of the workloads and storage configurations under test. A few HBase configuration parameters were modified for the testing; these are listed in the tables below.

Table 3: HBase configuration parameters (hbase-env.sh)

- hbase.hregion.memstore.chunkpool.maxsize: default 0, modified to 0.5
- hbase.hregion.max.filesize: default 1073741824, modified to 53687091200
- hbase.regionserver.handler.count: default 10, modified to 150

Table 4: HBase configuration parameters (hbase-site.xml)

- hbase.hregion.memstore.chunkpool.maxsize: default 0, modified to 0.5
- hbase.hregion.max.filesize: default 1073741824, modified to 53687091200
- hbase.regionserver.handler.count: default 10, modified to 150
- hbase.block.cache.size: default 0.2, modified to 0.4
- hbase.hregion.memstore.mslab.enabled: default false, modified to true
- hbase.hregion.memstore.mslab.maxallocation: default 262144, modified to 262144

These settings are also shown programmatically in a short sketch following the storage configuration descriptions below.

Test Validation and Results Summary

Test Methodology

The primary goal of this technical paper is to showcase the benefits of using SSDs within an HBase environment. To achieve this goal, SanDisk tested three main YCSB workloads (Workloads A, B and C) against multiple test configurations. The distinct test configurations were set up by varying the underlying storage configuration of the standalone HBase server (either an all-HDD or an all-SSD configuration) and the size of the dataset (both a smaller and a larger dataset were used in the testing). These storage configuration and dataset size variations are described below.

Storage Configuration

The storage configuration for the standalone HBase instance was varied between an all-HDD and an all-SSD configuration:

1. All-HDD configuration: The HBase server uses a single RAID0 volume with six HDDs for the single-instance HBase.
2. All-SSD configuration: The HDDs of the first configuration are replaced with SSDs, with a corresponding RAID0 volume for use with the HBase instance.
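For reference, the non-default parameter values from Tables 3 and 4 can also be expressed programmatically, as in the following sketch. In this testing the values were set in the HBase configuration files rather than in code; the sketch simply mirrors the table values using the Hadoop Configuration API and prints one of them back.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class HBaseTuningSketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();

            // Non-default values from Tables 3 and 4 above
            conf.setFloat("hbase.hregion.memstore.chunkpool.maxsize", 0.5f); // enable the MemStore chunk pool so chunk allocations are reused
            conf.setLong("hbase.hregion.max.filesize", 53687091200L);        // 50GB maximum region size before a region split is triggered
            conf.setInt("hbase.regionserver.handler.count", 150);            // more RPC handler threads for highly concurrent clients
            conf.setFloat("hbase.block.cache.size", 0.4f);                   // fraction of the heap used for the block cache (name as listed in Table 4)
            conf.setBoolean("hbase.hregion.memstore.mslab.enabled", true);   // MemStore-local allocation buffers reduce heap fragmentation

            System.out.println("hbase.regionserver.handler.count = "
                    + conf.get("hbase.regionserver.handler.count"));
        }
    }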

YCSB Workload Dataset Size

Two YCSB dataset sizes were used:

1. Smaller dataset: This dataset is smaller than the available memory. As a result, all of the data can be held in memory, avoiding disk I/O as much as possible.
2. Larger dataset: This dataset is larger than the available memory, and therefore requires more disk I/O operations than the smaller dataset.

Update-Heavy Workload (Workload A)

YCSB Workload A is referred to as an update-heavy workload, with a 50/50 ratio of updates and reads. The distribution of the update and read operations can be uniform, spread evenly across the entire dataset, or zipfian, in which case a portion of the dataset is accessed more frequently than the rest. A zipfian distribution is named after Zipf's Law, an empirical law of mathematical statistics regarding the statistical distribution of data; a minimal sketch illustrating this skewed access pattern follows Figure 2 below. Workload A models the behavior of applications such as a session store, where recent user actions are recorded.

Throughput (TPS) Results

The throughput results for YCSB Workload A are summarized in Figure 2. The X axis of the graph shows the two dataset sizes, and the Y axis shows the throughput in transactions per second. For each dataset size, the results for the uniform and zipfian distributions are shown for the all-HDD and all-SSD configurations via different colored bars (see the legend in the figure for more details). These same results are summarized in Table 5.

Figure 2: Throughput comparison for Workload A (bar chart of transactions per second for the all-SSD and all-HDD configurations)
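To illustrate how a zipfian request pattern concentrates accesses on a small portion of the key space, in contrast to a uniform pattern, the following self-contained sketch draws keys from a simple inverse-CDF Zipf sampler and reports how much of the traffic lands on the ten most popular keys. This is only an illustration of the skew; it is not the generator YCSB itself uses, although the 0.99 skew exponent matches YCSB's default zipfian constant.

    import java.util.Random;

    public class ZipfSketch {
        public static void main(String[] args) {
            int keys = 1000;     // size of the key space
            double s = 0.99;     // skew exponent; YCSB's zipfian constant defaults to 0.99
            Random rnd = new Random(42);

            // Precompute the normalization constant and the cumulative distribution over key ranks.
            double norm = 0.0;
            for (int k = 1; k <= keys; k++) norm += 1.0 / Math.pow(k, s);
            double[] cdf = new double[keys];
            double acc = 0.0;
            for (int k = 1; k <= keys; k++) {
                acc += (1.0 / Math.pow(k, s)) / norm;
                cdf[k - 1] = acc;
            }

            // Draw samples and count how many land on the 10 most popular keys.
            int samples = 100000, hotHits = 0;
            for (int i = 0; i < samples; i++) {
                double u = rnd.nextDouble();
                int k = 0;
                while (k < keys - 1 && cdf[k] < u) k++;   // linear scan of the CDF (fine for a sketch)
                if (k < 10) hotHits++;
            }
            System.out.printf("Top 10 of %d keys received %.1f%% of %d requests%n",
                    keys, 100.0 * hotHits / samples, samples);
        }
    }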

Table 5: Throughput results summary for Workload A (each dataset size has one row per request distribution, uniform and zipfian)

Workload A (50r/50w), smaller dataset: 3,263 TPS all-HDD, 21,681 TPS all-SSD
Workload A (50r/50w), smaller dataset: 5,705 TPS all-HDD, 27,561 TPS all-SSD
Workload A (50r/50w), larger dataset: 411 TPS all-HDD, 13,925 TPS all-SSD
Workload A (50r/50w), larger dataset: 748 TPS all-HDD, 23,769 TPS all-SSD

Latency Results

The read latency results for YCSB Workload A are summarized in Figure 3. The X axis of the graph shows the read latencies for the two dataset sizes, for both the uniform and zipfian distributions, and the Y axis shows the latency in milliseconds (ms). These same results are summarized in Table 6.

The write latency results for YCSB Workload A are not shown in the figures. HBase uses a write-ahead log and applies updates/edits in memory, flushing them to disk when the flush threshold is reached. Because updates/edits are absorbed in memory, the write latencies are between 0 and 1 ms, which is recorded as 0 ms. As a result, the write latencies are not shown in the figures. They are, however, recorded in Table 6.

Figure 3: Latency comparisons for Workload A (bar chart of read latency for the all-SSD and all-HDD configurations)

Table 6: Latency results summary for Workload A (read/write latencies in ms for the smaller and larger datasets; each storage configuration has rows for the two request distributions)

Workload A (50r/50w), HDD: read 505 / write 0 (smaller dataset); read >1,000 / write 0 (larger dataset)
Workload A (50r/50w), SSD: read 36 / write 0 (smaller dataset); read 38 / write 0 (larger dataset)
Workload A (50r/50w), HDD: read 278 / write 9 (smaller dataset); read >1,000 / write 0 (larger dataset)
Workload A (50r/50w), SSD: read 33 / write 0 (smaller dataset); read 30 / write 0 (larger dataset)

Read-Intensive Workload (Workload B)

YCSB Workloads B and C are read-intensive workloads. Workload B generates 95% read operations, with just 5% update operations. Workload C is a 100% read workload. Again, for both of these workloads, the frequency of access across the dataset can be uniform, or a zipfian distribution can be used, in which part of the data is accessed more frequently than the rest. These workloads simulate applications such as photo tagging, where most operations involve reading tags, and user-profile caches, where 100% of the workload is reading from the cache. These types of workloads are increasingly common today, especially for web-enabled cloud-computing workloads and social media workloads.

Throughput Results

The throughput results for YCSB Workload B are summarized in Figure 4. The X axis of the graph shows the two dataset sizes, and the Y axis shows the throughput in transactions per second. For each dataset size, the results for the uniform and zipfian distributions are shown for the all-HDD and all-SSD configurations via different colored bars. These same results are summarized in Table 7.

Figure 4: Throughput comparison for Workload B (bar chart of transactions per second for the all-SSD and all-HDD configurations)

Table 7: Throughput results summary for Workload B (each dataset size has one row per request distribution, uniform and zipfian)

Workload B (95r/5w), smaller dataset: 2,186 TPS all-HDD, 16,051 TPS all-SSD
Workload B (95r/5w), smaller dataset: 3,870 TPS all-HDD, 20,169 TPS all-SSD
Workload B (95r/5w), larger dataset: 219 TPS all-HDD, 8,193 TPS all-SSD
Workload B (95r/5w), larger dataset: 410 TPS all-HDD, 14,943 TPS all-SSD

Latency Results

The read latency results for YCSB Workload B are summarized in Figure 5. The X axis of the graph shows the read latencies for the two dataset sizes, for both the uniform and zipfian distributions, and the Y axis shows the latency in milliseconds (ms). These same results are summarized in Table 8.

The write latency results for YCSB Workload B are not shown in the figures. Since this is a read-heavy workload, only a very small proportion of the operations are writes, so write latencies are not significant for this workload. For the small proportion of writes, HBase applies updates/edits in memory, with latencies of 0 to 1 ms, which YCSB records as 0 ms.

Figure 5: Latency comparisons for Workload B (bar chart of read latency for the all-SSD and all-HDD configurations)

Table 8: Latency results summary for Workload B (read/write latencies in ms for the smaller and larger datasets; each storage configuration has rows for the two request distributions)

Workload B (95r/5w), HDD: read 427 / write 0 (smaller dataset); read >1,000 / write 0 (larger dataset)
Workload B (95r/5w), SSD: read 29 / write 0 (smaller dataset); read 37 / write 0 (larger dataset)
Workload B (95r/5w), HDD: read 76 / write 0 (smaller dataset); read >1,000 / write 0 (larger dataset)
Workload B (95r/5w), SSD: read 23 / write 0 (smaller dataset); read 31 / write 0 (larger dataset)

Read-Only Workload (Workload C)

Throughput Results

The throughput results for YCSB Workload C are summarized in Figure 6. The X axis of the graph shows the two dataset sizes, and the Y axis shows the throughput in transactions per second. For each dataset size, the results for the uniform and zipfian distributions are shown for the all-HDD and all-SSD configurations via different colored bars (see the legend in the figure for more details). These same results are summarized in Table 9.

Figure 6: Throughput comparison for Workload C (bar chart of transactions per second for the all-SSD and all-HDD configurations)

Table 9: Throughput results summary for Workload C (each dataset size has one row per request distribution, uniform and zipfian)

Workload C (100r/0w), smaller dataset: 1,865 TPS all-HDD, 15,334 TPS all-SSD
Workload C (100r/0w), smaller dataset: 3,582 TPS all-HDD, 19,756 TPS all-SSD
Workload C (100r/0w), larger dataset: 209 TPS all-HDD, 7,799 TPS all-SSD
Workload C (100r/0w), larger dataset: 373 TPS all-HDD, 14,214 TPS all-SSD

Latency Results

The read latency results for YCSB Workload C are summarized in Figure 7. The X axis of the graph shows the read latencies for the two dataset sizes, for both the uniform and zipfian distributions, and the Y axis shows the latency in milliseconds (ms). These same results are summarized in Table 10.

The write latency results for YCSB Workload C are not shown in the figures. Since this is a read-only workload, there are no write operations in Workload C, and write latencies are therefore not applicable.

Figure 7: Latency comparisons for Workload C (bar chart of read latency for the all-SSD and all-HDD configurations)

Table 10: Latency results summary for Workload C (read/write latencies in ms for the smaller and larger datasets; each storage configuration has rows for the two request distributions)

Workload C (100r/0w), HDD: read 463 / write 0 (smaller dataset); read >1,000 / write 0 (larger dataset)
Workload C (100r/0w), SSD: read 28 / write 0 (smaller dataset); read 37 / write 0 (larger dataset)
Workload C (100r/0w), HDD: read 124 / write 0 (smaller dataset); read >1,000 / write 0 (larger dataset)
Workload C (100r/0w), SSD: read 22 / write 0 (smaller dataset); read 31 / write 0 (larger dataset)

Conclusion

The key takeaways from the YCSB tests for Apache HBase, comparing all-HDD and all-SSD storage configurations, are as follows:

1. SanDisk SSDs show a dramatic increase (30-40x) in the number of transactions per second for update-heavy and read-heavy workloads, for different types of data access, whether uniform across the dataset or targeted at only a portion of the dataset. This translates to a higher rate of query processing in a variety of data analytics applications.
2. SanDisk SSDs also benefit HBase workloads with significantly lower latencies than mechanically driven HDDs. Lower latencies allow quicker query response times, increasing the productivity and efficiency of analytics operations.
3. The following is a summation of the key takeaways from this SanDisk test of the server/storage platform supporting the Apache HBase workload:

30-40x transactions per second with SanDisk SSDs: SanDisk SSDs within the HBase environment provide dramatic improvements (30x to 40x) in transactions per second when compared to HDDs. Improved performance is seen for update-heavy as well as read-heavy workloads, for both uniform and zipfian distributions.

Significantly lower latencies (< 50ms) with SanDisk SSDs: SanDisk SSDs consistently provide read latencies of less than 50ms, whereas with HDDs the latency consistently exceeds 1,000ms for the larger dataset. Lower latencies translate to much quicker response times for queries.

Summary

The higher transactions per second and lower latency results seen with CloudSpeed SSDs directly translate to faster query processing, thus reducing the time to results. In an organization's IT department, this results in significant improvements in business-process efficiency, and therefore in cost savings.

The tests described in this paper clearly demonstrate the benefits of deploying SanDisk's CloudSpeed SATA solid-state drives (SSDs) within HBase environments. Using CloudSpeed drives helps deploy a cost-effective and process-efficient data analytics environment by providing increased throughput and reduced latency for data analytics queries.

It is important to understand that workloads vary significantly in their I/O access patterns. Therefore, not all workloads may benefit equally from the use of SSDs in an HBase deployment. For the most effective deployments, customers should develop a proof of concept for their deployment and workloads, so that they can evaluate the benefits that SSDs can provide to their organization.
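The 30x to 40x figure can be checked directly against the throughput tables. Taking the lower-throughput rows of Tables 5, 7 and 9, which correspond to the dataset larger than memory, where disk I/O dominates, the all-SSD to all-HDD TPS ratios all fall between roughly 30x and 40x; the short sketch below simply performs that arithmetic.

    public class SpeedupCheck {
        public static void main(String[] args) {
            // TPS pairs (all-HDD, all-SSD) for the larger dataset, taken from Tables 5, 7 and 9.
            double[][] tps = {
                {411, 13925}, {748, 23769},   // Workload A, larger dataset (two request distributions)
                {219, 8193},  {410, 14943},   // Workload B, larger dataset
                {209, 7799},  {373, 14214},   // Workload C, larger dataset
            };
            for (double[] pair : tps) {
                System.out.printf("HDD %.0f TPS -> SSD %.0f TPS = %.1fx%n",
                        pair[0], pair[1], pair[1] / pair[0]);
            }
        }
    }

For the smaller, memory-resident dataset the same calculation gives ratios of roughly 5x to 8x, since much of the read traffic is absorbed by memory rather than by the drives.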

References

- SanDisk corporate website: www.sandisk.com
- Apache Hadoop Distributed File System: http://hadoop.apache.org/docs/r1.2.1/hdfs_design.html
- Apache HBase: http://hbase.apache.org
- Hortonworks HBase: http://hortonworks.com/hadoop/hbase/
- Yahoo! Labs, Yahoo! Cloud Serving Benchmark: http://research.yahoo.com/web_information_management/ycsb/
- HBase storage architecture: http://files.meetup.com/1228907/nyc%20hadoop%20meetup%20-%20introduction%20to%20hbase.pdf
- HBase write performance: http://hbase.apache.org/book/perf.writing.html

Products, samples and prototypes are subject to update and change for technological and manufacturing purposes. SanDisk Corporation general policy does not recommend the use of its products in life support applications wherein a failure or malfunction of the product may directly threaten life or injury. Without limitation to the foregoing, SanDisk shall not be liable for any loss, injury or damage caused by use of its products in any of the following applications: Special applications such as military related equipment, nuclear reactor control, and aerospace Control devices for automotive vehicles, train, ship and traffic equipment Safety system for disaster prevention and crime prevention Medical-related equipment including medical measurement device Accordingly, in any use of SanDisk products in life support systems or other applications where failure could cause damage, injury or loss of life, the products should only be incorporated in systems designed with appropriate redundancy, fault tolerant or back-up features. Per SanDisk Terms and Conditions of Sale, the user of SanDisk products in life support or other such applications assumes all risk of such use and agrees to indemnify, defend and hold harmless SanDisk Corporation and its affiliates against all damages. Security safeguards, by their nature, are capable of circumvention. SanDisk cannot, and does not, guarantee that data will not be accessed by unauthorized persons, and SanDisk disclaims any warranties to that effect to the fullest extent permitted by law. This document and related material is for information use only and is subject to change without prior notice. SanDisk Corporation assumes no responsibility for any errors that may appear in this document or related material, nor for any damages or claims resulting from the furnishing, performance or use of this document or related material. SanDisk Corporation explicitly disclaims any express and implied warranties and indemnities of any kind that may or could be associated with this document and related material, and any user of this document or related material agrees to such disclaimer as a precondition to receipt and usage hereof. EACH USER OF THIS DOCUMENT EXPRESSLY WAIVES ALL GUARANTIES AND WARRANTIES OF ANY KIND ASSOCIATED WITH THIS DOCUMENT AND/OR RELATED MATERIALS, WHETHER EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION, ANY IMPLIED WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE OR INFRINGEMENT, TOGETHER WITH ANY LIABILITY OF SANDISK CORPORATION AND ITS AFFILIATES UNDER ANY CONTRACT, NEGLIGENCE, STRICT LIABILITY OR OTHER LEGAL OR EQUITABLE THEORY FOR LOSS OF USE, REVENUE, OR PROFIT OR OTHER INCIDENTAL, PUNITIVE, INDIRECT, SPECIAL OR CONSEQUENTIAL DAMAGES, INCLUDING WITHOUT LIMITATION PHYSICAL INJURY OR DEATH, PROPERTY DAMAGE, LOST DATA, OR COSTS OF PROCUREMENT OF SUBSTITUTE GOODS, TECHNOLOGY OR SERVICES. No part of this document may be reproduced, transmitted, transcribed, stored in a retrievable manner or translated into any language or computer language, in any form or by any means, electronic, mechanical, magnetic, optical, chemical, manual or otherwise, without the prior written consent of an officer of SanDisk Corporation. All parts of the SanDisk documentation are protected by copyright law and all rights are reserved. For more information, please visit: www.sandisk.com/enterprise At SanDisk, we re expanding the possibilities of data storage. 
For more than 25 years, SanDisk's ideas have helped transform the industry, delivering next-generation storage solutions for consumers and businesses around the globe. 951 SanDisk Drive, Milpitas, CA 95035, USA. 2014 SanDisk Corporation. All rights reserved. SanDisk is a trademark of SanDisk Corporation, registered in the United States and other countries. CloudSpeed and CloudSpeed Ascend are trademarks of SanDisk Enterprise IP LLC. Other brand names mentioned herein are for identification purposes only and may be the trademarks of their holder(s). 121014