EFFICIENT GEAR-SHIFTING FOR A POWER-PROPORTIONAL DISTRIBUTED DATA-PLACEMENT METHOD



Hieu Hanh Le, Satoshi Hikida, and Haruo Yokota
Tokyo Institute of Technology

1.1 Background
- Commodity-based distributed file systems are useful for efficient Big Data processing.
  - Examples: the Hadoop Distributed File System (HDFS) and the Google File System.
  - They support the MapReduce framework and achieve good scalability.
- They use a large number of nodes to store the huge amounts of data requested by data-intensive applications, which also increases the power consumption of the storage system.
- Power-aware file systems are therefore increasingly moving towards power-proportional designs.

1.2 Power-Proportional Storage System
- A system should consume energy in proportion to the amount of work performed [Barroso and Holzle, 2007].
- The system's operation is set to multiple gears, each containing a different number of active data nodes.
- Gear-shifting is made possible by the data placement method (a minimal sketch of the gear idea follows below).
[Figure: data items D1-D4 spread over nodes 1-1, 1-2, 2-1, and 2-2 in the high gear; data migration lets a low gear serve the dataset with fewer active nodes.]
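
To make the gear idea concrete, here is a minimal sketch (not from the paper) in which a gear is simply the set of nodes that must be powered on; the node names and the two-group layout are taken from the figure, everything else is an assumption.

```python
# Minimal sketch, assuming the two-group layout from the figure above:
# a "gear" is just the set of data nodes that stay powered on.
GEARS = {
    1: ["1-1", "1-2"],                       # low gear: only the always-on group
    2: ["1-1", "1-2", "2-1", "2-2"],         # high gear: the second group is also on
}

def active_nodes(gear: int) -> list[str]:
    """Nodes that must be active for the requested gear."""
    return GEARS[gear]

def power_fraction(gear: int) -> float:
    """Rough proxy for power use: fraction of all nodes kept active."""
    total = len(GEARS[max(GEARS)])
    return len(GEARS[gear]) / total
```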

2.1 Current Data Placement Methods
- Rabbit [ACM SOCC 2010] and Sierra [ACM EuroSys 2011]:
  - The primary copies of the dataset are stored entirely in the group of covering-set nodes, which are always active.
  - The backup copies of the dataset are stored entirely in each group of non-covering-set nodes, which can be deactivated (see the placement sketch below).
[Figure: three groups mapped to gears. Group 1 (nodes 1-1, 1-2) holds primary data P1, P2; Group 2 (2-1, 2-2) and Group 3 (3-1, 3-2) hold backup data B1, B2. P: primary data, B: backup data.]
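
The covering-set placement can be sketched roughly as follows. This is an illustrative reading of the Rabbit/Sierra idea described above, not their actual code; the group layout is the one from the figure and the replication factor is an assumption.

```python
# Illustrative sketch of covering-set placement (Rabbit/Sierra style), assuming
# the three-group layout above: primaries live on the always-active covering set,
# and each backup copy lives in a different group that can be powered down.
COVERING_SET = ["1-1", "1-2"]                       # Group 1: always active
NON_COVERING = [["2-1", "2-2"], ["3-1", "3-2"]]     # Groups 2 and 3: may sleep

def place_block(block_id: int, replication: int = 3) -> list[str]:
    """Return the nodes holding the primary and backup copies of a block."""
    nodes = [COVERING_SET[block_id % len(COVERING_SET)]]        # primary copy
    for group in NON_COVERING[: replication - 1]:               # one backup per group
        nodes.append(group[block_id % len(group)])
    return nodes

# place_block(0) -> ['1-1', '2-1', '3-1']; the lowest gear (covering set only)
# still holds a complete copy of the dataset.
```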

2.2 Motivation
- Efficient gear-shifting becomes vital in a power-proportional system.
- Dealing with write requests while operating in a low gear:
  - Apply Write-Offloading [ACM Trans. on Storage, 2008]: data that should be written to deactivated nodes are temporarily stored on active nodes (see the sketch below).
  - When the system shifts to a higher gear, all the temporary data have to be transferred to the reactivated nodes according to the placement policy.
- Reducing the performance degradation when shifting to a higher gear is still an open issue.
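
Below is a minimal sketch of the Write-Offloading behaviour described in the bullets. It is one possible reading, not the authors' code: the node names, the choice of which active node absorbs a parked write, and the helper functions are assumptions.

```python
# Minimal sketch of Write-Offloading in a low gear: writes aimed at inactive
# nodes are parked on active nodes, then replayed to their intended homes
# when the system shifts up a gear.
offload_log: dict[str, list[str]] = {}   # intended node -> blocks parked elsewhere

def write_block(block: str, intended_node: str, active: set[str]) -> str:
    """Return the node that actually receives the write."""
    if intended_node in active:
        return intended_node                          # normal write path
    offload_log.setdefault(intended_node, []).append(block)
    return sorted(active)[0]                          # park on some active node

def shift_up(reactivated: set[str]) -> list[tuple[str, str]]:
    """On gear-up, list the (block, destination) transfers that must be replayed."""
    transfers = []
    for node in reactivated:
        for block in offload_log.pop(node, []):
            transfers.append((block, node))
    return transfers
```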

3. Goal and Approach
- Goal: propose Accordion, a power-proportional data placement method with efficient gear-shifting.
- Approach:
  - Control the power consumption of the system by dividing the nodes into separate groups.
  - Carefully design the data replication strategy to provide efficient gear-shifting.
  - The location of the primary data becomes the key to reducing the amount of temporary data that needs to be retransferred.

4.1 Data Replication Strategy in Accordion
- Primary data: evenly distributed across all of the nodes in the system.
- Backup data: replicated from outer (outbound) nodes to inner (inbound) nodes step by step; finally, the chained declustering policy is applied at the covering-set nodes to enhance reliability at the lowest gear (see the sketch below).
[Figure: writing dataset D = {P1, ..., P6}; primaries are spread over nodes 1-1 through 3-2, backups are replicated inward gear by gear, and chained declustering is applied within Gear 1.]
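
The strategy can be illustrated with a small sketch under the three-group layout from the figure. The exact replica counts and chaining rules in the paper may differ, so treat this as one plausible reading rather than the authors' implementation.

```python
# Illustrative sketch of the Accordion replication strategy described above,
# assuming the three-group layout from the figure (not the authors' code).
GROUPS = [["1-1", "1-2"], ["2-1", "2-2"], ["3-1", "3-2"]]   # GROUPS[0] = covering set
ALL_NODES = [n for g in GROUPS for n in g]

def primary_node(block_id: int) -> str:
    """Primary copies are spread evenly over every node in the system."""
    return ALL_NODES[block_id % len(ALL_NODES)]

def backup_nodes(block_id: int) -> list[str]:
    """Backups hop inward group by group until the covering set, where the
    copy follows chained declustering (kept on the neighbouring node)."""
    owner = primary_node(block_id)
    group_idx = next(i for i, g in enumerate(GROUPS) if owner in g)
    copies = []
    for g in range(group_idx - 1, 0, -1):             # step by step toward Gear 1
        copies.append(GROUPS[g][block_id % len(GROUPS[g])])
    covering = GROUPS[0]
    copies.append(covering[(block_id + 1) % len(covering)])  # chained declustering
    return copies

# primary_node(4) -> '3-1'; backup_nodes(4) -> ['2-1', '1-2']: the data flows
# inward, and the covering set still ends up holding a copy of every block.
```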

4.2 Accordion vs. Other Methods
- In Accordion, the amount of retransferred data is smaller than in the other methods.
- The retransferred data are the data that should have been written to non-covering-set nodes.
[Figure: side-by-side placement of primaries P1-P6 and backups B1-B6 under Accordion versus the other methods (Rabbit, Sierra), illustrating that fewer blocks have to be moved back to non-covering-set nodes with Accordion.]

4.3 Accordion in a Multiple-Gear File System
[Chart: normalized amount of retransferred data for the shifts Gear 1 to 2, 2 to 3, 3 to 4, and 4 to 5, comparing Rabbit or Sierra (G=5) with Accordion (G=5, scale=1, 2, 3).]
- The amount of retransferred data becomes smaller when the system performs gear-shifting at a higher gear, because the number of inactive nodes is decreasing.
- Scale: the ratio of the number of nodes between two consecutive groups (see the sketch below).
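
Under the stated definition of "scale", one plausible way to derive the group sizes is sketched below. Both the direction of the ratio and the covering-set size are assumptions for illustration, not figures from the slides.

```python
# Illustrative only: one reading of the "scale" parameter, where each successive
# group has `scale` times as many nodes as the previous one.
def group_sizes(num_gears: int, scale: int, covering_set: int = 2) -> list[int]:
    """Number of nodes in each group, from the covering set outward."""
    return [covering_set * scale ** i for i in range(num_gears)]

# Example: group_sizes(5, 2) -> [2, 4, 8, 16, 32]
```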

4.4 The Skew in Data Distribution in Accordion
- The imbalance in the amount of data stored per node does not lead to a considerable problem.
  - Current load balancing mechanisms use only part of each node's capacity rather than all of it.
- The load can still be well distributed across all the nodes: each node serves the same amount of data, taking full advantage of parallelism when serving requests.

4.5 Physical Data Placement on an Accordion Node
- Deals with the mixture of primary and backup data on each node; in most cases only primary data or only backup data are accessed at a time.
- Accordion-with-Disk-Partition (Accordion-DP): uses a disk partitioning technique to separate the two kinds of data, improving performance by reducing seek time on disks (see the sketch below).
[Figure: on a plain Accordion node, primary (P) and backup (B) blocks are interleaved, causing long seeks; with Accordion-DP, disk partitioning keeps them in separate regions, giving short seeks.]
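
A minimal sketch of the Accordion-DP idea follows, with assumed partition paths: route primary and backup block files to different on-disk partitions so that a burst of accesses to one kind of data stays within one disk region.

```python
# Minimal sketch of Accordion-DP placement on a single node. The mount points
# are assumptions for illustration; the point is only that primary and backup
# copies land in separate partitions, which shortens seeks when a workload
# touches mostly one kind of data.
PRIMARY_ROOT = "/data/part0/primary"   # assumed partition for primary blocks
BACKUP_ROOT = "/data/part1/backup"     # assumed partition for backup blocks

def local_block_path(block_id: int, is_primary: bool) -> str:
    """Choose the partition for a block according to its replica type."""
    root = PRIMARY_ROOT if is_primary else BACKUP_ROOT
    return f"{root}/blk_{block_id}"

# local_block_path(42, True)  -> '/data/part0/primary/blk_42'
# local_block_path(42, False) -> '/data/part1/backup/blk_42'
```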

5. Empirical Experiments
- A prototype of Accordion was developed on a modified HDFS that can perform:
  - different data placement methods,
  - load balancing,
  - write requests in a low gear based on Write-Offloading.
- Extensive experiments:
  - evaluate the effectiveness of load balancing,
  - verify the effectiveness of Accordion for efficient gear-shifting,
  - evaluate the power-proportionality of Accordion under different system configurations.

5.1 Experiment Framework
- Low-power nodes:
  Name: ASUS Eeebox EB100
  CPU: TM8600, 1.0 GHz
  Memory: 4 GB DRAM
  NIC: 1000 Mb/s
  OS: Linux 3.0, 64-bit
  Java: JDK 1.7.0
- Power consumption measured with a HIOKI 3334 power meter.

5.2 Effect of Load Balancing
- Evaluate the throughput of a workload that reads a dataset, for the methods Accordion, Accordion-DP, and Rabbit.
- Experiment environment:
  # Gears: 3
  # Active nodes (Accordion, Accordion-DP): 2, 8, 20
  # Active nodes (Rabbit): 2, 7, 21
  # Files: 420
  File size: 64 MB
  HDFS version: 0.20.2
  Block size: 32 MB

5.2.1 Effect of Load Balancing
[Chart: average throughput (MB/s) versus number of active nodes for Accordion-DP, Accordion, and Rabbit in Gears 1-3, together with the distribution of serving time.]
- The Accordion-based methods gained better throughput, especially in Gear 3, thanks to a better distribution of serving time.
- The effectiveness of disk partitioning was verified: Accordion-DP achieved better results than Accordion.

5.3 Efficient Gear-Shifting
- Gear-shifting scenario:
  1. In Gear 3, write part of the dataset (W1 files).
  2. In Gear 2, write another part of the dataset (W2 files) with Write-Offloading.
  3. Shift back to Gear 3: (3.1) retransfer the offloaded data, then (3.2) read the whole dataset (W1 + W2 files).

5.3.1 Experiment Method
- Evaluate the performance when the system shifts to a higher gear:
  - execution time for retransferring data,
  - effect of the retransferring process on read performance.
- Methods: Accordion-DP and Rabbit.
- Four configurations varying the ratio of W1 to W2:
  Configuration / W1 (files) / W2 (files)
  Without Updated Data: 420 / 0
  Small: 350 / 70
  Medium: 280 / 140
  Large: 210 / 210

5.3.2 Data Retransfer Process
[Charts: execution time (s) for retransferring the temporary data, and the size (MB) of the retransferred data, for Rabbit and Accordion-DP under the Small, Medium, and Large configurations.]
- Accordion-DP significantly reduced the execution time, by up to 66%.
- The size of the retransferred data in Accordion-DP was always smaller than in Rabbit.

5.3.3 Throughput of Reading the Whole Dataset
[Chart: read throughput (MB/s) for Rabbit and Accordion-DP under the Without Updated Data, Small, Medium, and Large configurations.]
- Throughput was significantly degraded, by more than 60%, due to the data retransfer process.
- Accordion-DP always gained better throughput, by 30%.

5.4 Effect of System Configuration
- Evaluate the effects of different Accordion configurations on power-proportionality: 3 gears with 3 configurations.
  - Measure the throughput of a workload reading 420 files and the power consumption of the storage nodes.
- Configuration / # active nodes in Gear 1 / Gear 2 / Gear 3
  Accordion-DP (4, 8, 12): 4 / 8 / 12
  Accordion-DP (4, 12, 20): 4 / 12 / 20
  Accordion-DP (8, 16, 24): 8 / 16 / 24

5.4.1 Effect of System Configurations on Performance
[Chart: throughput per watt (MB/W) in Gears 1-3 for Accordion-DP (4, 8, 16), Accordion-DP (4, 12, 20), and Accordion-DP (8, 16, 24).]
- The largest-scale configuration yielded the best result because the load on each node is the lightest.
- With the same number of active nodes, the higher gears gained better results, because the mixture of primary and backup data is less complex.

6.1 Conclusion
- Proposed Accordion, a data placement method for efficient gear-shifting.
  - Reduced the execution time required for data movement by 66% and improved performance by 30% compared with Rabbit.
  - Ensured a smaller amount of reallocated data and achieved better power-proportionality, owing to the different primary data locations.
  - Used a disk partitioning technique to reduce the seek time at each node.
- Showed high potential for deployment in large-scale systems.

6.2 Future Work
- More experiments with different workloads and system configurations: larger scale and a larger number of gears.
- Improve the load balancing algorithm.
- Integrate Accordion with architectures other than HDFS.
- Consider a specific algorithm to deal with system failures.

THANK YOU FOR YOUR ATTENTION. Q&A
2013 IEEE International Conference on Big Data