Experiences with Lustre* and Hadoop*




Gabriele Paciucci (Intel), June 2014
Intel Confidential. Do Not Forward. *Some names and brands may be claimed as the property of others.

Agenda
- Overview
- Intel Enterprise Edition for Lustre 2.0
- How to configure Cloudera Hadoop with Lustre using HAL
- Benchmark results
- Conclusion

Intel Enterprise Edition for Lustre v2.0
- Intel Manager for Lustre: REST API, management and monitoring services, CLI (Intel value-add)
- Hadoop connectors (Intel value-add)
- Hierarchical Storage Management: tiered storage
- Storage plug-in: array integration
- Lustre file system: full distribution of open source Lustre software v2.5 (open source)
- Global technical support from Intel

Hadoop Connectors for Lustre (HAL and HAM)
(Same component stack as the previous slide, with the Hadoop connectors layer highlighted.)

Hadoop Adapter for Lustre (HAL)
- Replaces HDFS with Lustre
- Plugin for Apache Hadoop 2.3 and CDH 5.0.1
- No changes to Lustre needed
- Enables HPC environments to use MapReduce v2 with existing data in place
- Allows Hadoop environments to migrate to a general purpose file system
- Class hierarchy: org.apache.hadoop.fs.FileSystem -> RawLocalFileSystem -> LustreFileSystem

HPC Adapter for MapReduce (HAM)
- Replaces YARN with Slurm
- Plugin for Apache Hadoop 2.3 and CDH 5.0.1
- No changes to Lustre needed
- Enables HPC environments to use Slurm as the scheduler for MapReduce v2 jobs
- Allows Hadoop environments to migrate to a more sophisticated scheduler

Why Use Lustre for Hadoop?
Convergence of HPC and data analytics:
- Desire for HPC systems to run Hadoop workloads
- Hadoop is the most popular software stack for big data analytics
- Lustre is the file system of choice for HPC clusters
- But HDFS expects nodes with locally attached disks, while most HPC clusters have diskless compute nodes
Benefits of using Lustre for Hadoop applications:
- Improved application performance without changes to the application
- Management simplicity lowers costs
- More efficient and productive storage resources
- No data transfer overhead for staging inputs and extracting results
- No need to partition storage into HPC (Lustre) and analytics (HDFS)

Our Setup Overview
1. Preparation
2. Install and set up Intel Enterprise Edition for Lustre 2.0
3. Mount Lustre on the Hadoop compute nodes
4. Install and set up Cloudera Hadoop Distribution 5.0.1 and HAL (included in IEEL 2.0)
5. Direct Hadoop I/O to Lustre instead of HDFS

Preparation
Ensure consistent UIDs and GIDs across all nodes, especially for the Hadoop users. The best way is to set up a global naming service and connect both the Lustre servers and the Hadoop servers to it. For our lab, we used these commands:

# groupadd -g 499 zookeeper
# groupadd -g 495 mapred
# groupadd -g 497 yarn
# groupadd -g 498 hadoop
# groupadd -g 496 hdfs
# adduser -u 496 -g 496 hdfs
# adduser -u 495 -g 495 mapred
# adduser -u 497 -g 497 yarn
# adduser -u 498 -g 499 zookeeper
# groupmems -g hadoop -a yarn
# groupmems -g hadoop -a mapred
# groupmems -g hadoop -a hdfs
# usermod -d /var/lib/hadoop-yarn -s /bin/bash yarn
# usermod -d /var/lib/hadoop-mapreduce -s /bin/bash mapred
# usermod -d /var/lib/hadoop-hdfs -s /bin/bash hdfs
# usermod -d /var/run/zookeeper -s /sbin/nologin zookeeper
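To confirm that the UIDs and GIDs really do match across nodes, the account definitions can be dumped in a diff-friendly form. This is a hypothetical helper, not part of IEEL; the account names come from the slide above, and the function reads a passwd-format file so the logic can be exercised on a sample.

```shell
# Print "user uid gid" for each requested account from a passwd-format
# file, so the output can be diffed between Lustre and Hadoop nodes.
check_ids() {
  local passwd="$1"; shift
  for u in "$@"; do
    awk -F: -v u="$u" '$1 == u { print $1, $3, $4 }' "$passwd"
  done
}

# On a real node: check_ids /etc/passwd hdfs mapred yarn zookeeper
```

Running this on every node and diffing the outputs catches mismatched IDs before any file on the shared Lustre namespace ends up with the wrong ownership.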

Install and Set Up Intel Enterprise Edition for Lustre 2.0
- Intel Manager for Lustre (IML) is the value-added software tool to manage and monitor Intel Enterprise Edition for Lustre 2.0
- IML hides much of Lustre's complexity
- IML was installed and configured in managed mode to collect throughput data
- We used Lustre version 2.5.15, shipped with IEEL 2.0 RC2

Mount Lustre on the Hadoop Compute Nodes
On all Hadoop nodes, we mounted the same Lustre file system before installing any Hadoop software:

# mount -t lustre 192.168.1.39@o2ib:/lustre /scratch/
# df -h
Filesystem                    Size  Used  Avail  Use%  Mounted on
/dev/sda2                     444G   22G   400G    6%  /
192.168.1.39@o2ib0:/lustrefs  1.6T  160G   1.4T   11%  /scratch
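Since every later step assumes /scratch is Lustre on every node, a quick sanity check of the mount table is cheap insurance. This is a hypothetical snippet, not from the deck; it parses a /proc/mounts-style table passed as a file so the logic can be tested against a sample.

```shell
# fstype_of MOUNTPOINT MOUNTS_FILE: print the filesystem type recorded
# for an exact mount point in a /proc/mounts-style table
# (fields: device mountpoint fstype options dump pass).
fstype_of() {
  awk -v mp="$1" '$2 == mp { print $3 }' "$2"
}

# On a real node:
#   [ "$(fstype_of /scratch /proc/mounts)" = lustre ] || echo "/scratch is NOT Lustre"
```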

Install and Set Up Cloudera Hadoop Distribution 5.0.1 and HAL (included in IEEL 2.0)
On all Hadoop compute nodes:
- Installed Oracle JDK version 1.7.0_45 and set JAVA_HOME
- Installed the CDH repository in RPM format
Based on the specific role of each Hadoop compute node, we installed the following packages:

Role              Command used to install
Resource Manager  yum install hadoop-yarn-resourcemanager
History Server    yum install hadoop-mapreduce-historyserver hadoop-yarn-proxyserver
Node Manager      yum install hadoop-yarn-nodemanager hadoop-mapreduce

Install and Set Up Cloudera Hadoop Distribution 5.0.1 and HAL (included in IEEL 2.0)
On one single Hadoop compute node, we set permissions on the Hadoop root directory:

# mkdir /scratch/hadoop
# chmod 0777 /scratch/hadoop
# setfacl -R -m group:hadoop:rwx /scratch/hadoop
# setfacl -R -d -m group:hadoop:rwx /scratch/hadoop

On all Hadoop compute nodes, we created a Lustre-specific configuration and made it the active one:

# cp -r /etc/hadoop/conf.hdfs /etc/hadoop/conf.lustre
# alternatives --install /etc/hadoop/conf hadoop-conf /etc/hadoop/conf.lustre 60
# alternatives --set hadoop-conf /etc/hadoop/conf.lustre

On all Hadoop compute nodes, we installed the HAL plugin:

# cp ./ieel-2.0.0/hadoop/hadoop-lustre-plugin-2.3.0.jar /usr/lib/hadoop/

CDH 5.0.1 Configuration for Lustre: CORE-SITE.XML

Property name                      Value                                              Description
fs.defaultFS                       lustre://                                          Configure Hadoop to use Lustre as the default file system
fs.lustre.impl                     org.apache.hadoop.fs.LustreFileSystem              Configure Hadoop to use the Lustre FileSystem class
fs.AbstractFileSystem.lustre.impl  org.apache.hadoop.fs.LustreFileSystem$LustreFs     Configure Hadoop to use the Lustre AbstractFileSystem class
fs.root.dir                        /scratch/hadoop                                    Hadoop root directory on the Lustre mount point
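The flattened table above corresponds to a core-site.xml fragment roughly like the following. This is a reconstruction from the table, not copied from the deck; property name casing follows standard Hadoop conventions.

```xml
<configuration>
  <!-- Use Lustre as the default file system -->
  <property>
    <name>fs.defaultFS</name>
    <value>lustre://</value>
  </property>
  <!-- FileSystem implementation provided by the HAL plugin jar -->
  <property>
    <name>fs.lustre.impl</name>
    <value>org.apache.hadoop.fs.LustreFileSystem</value>
  </property>
  <property>
    <name>fs.AbstractFileSystem.lustre.impl</name>
    <value>org.apache.hadoop.fs.LustreFileSystem$LustreFs</value>
  </property>
  <!-- Hadoop root directory on the Lustre mount point -->
  <property>
    <name>fs.root.dir</name>
    <value>/scratch/hadoop</value>
  </property>
</configuration>
```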

CDH 5.0.1 Configuration for Lustre: MAPRED-SITE.XML

Property name                                     Value                                                Description
mapreduce.map.speculative                         false                                                Turn off speculative execution of map tasks (currently incompatible with Lustre)
mapreduce.reduce.speculative                      false                                                Turn off speculative execution of reduce tasks (currently incompatible with Lustre)
mapreduce.job.map.output.collector.class          org.apache.hadoop.mapred.SharedFsPlugins$MapOutputBuffer   MapOutputCollector implementation to use for the shuffle phase, specifically for Lustre
mapreduce.job.reduce.shuffle.consumer.plugin.class  org.apache.hadoop.mapred.SharedFsPlugins$Shuffle   Class whose instance is used by the reduce tasks of a job to send shuffle requests
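Likewise, the mapred-site.xml settings above would look roughly like this as a fragment (again a reconstruction from the table, not copied from the deck; the SharedFsPlugins classes are the shared-filesystem shuffle implementations shipped with the HAL plugin as described above):

```xml
<configuration>
  <!-- Speculative execution is currently incompatible with Lustre -->
  <property>
    <name>mapreduce.map.speculative</name>
    <value>false</value>
  </property>
  <property>
    <name>mapreduce.reduce.speculative</name>
    <value>false</value>
  </property>
  <!-- Shuffle through the shared Lustre namespace -->
  <property>
    <name>mapreduce.job.map.output.collector.class</name>
    <value>org.apache.hadoop.mapred.SharedFsPlugins$MapOutputBuffer</value>
  </property>
  <property>
    <name>mapreduce.job.reduce.shuffle.consumer.plugin.class</name>
    <value>org.apache.hadoop.mapred.SharedFsPlugins$Shuffle</value>
  </property>
</configuration>
```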

Test HAL and Run the Hadoop Services
From one node:

# sudo -u hdfs hadoop fs -mkdir -p /test1/test2
# sudo -u hdfs hadoop fs -ls /test*
Found 1 items
drwxrwxrwx   - hdfs hdfs       4096 2014-06-09 18:05 /test1/test2
# ls -l /scratch/hadoop/test*
total 4
drwxrwxrwx+ 2 hdfs hdfs 4096 Jun  9 18:05 test2

Based on the specific role of each Hadoop compute node, we started these services:

Role              Services to start
Resource Manager  service hadoop-yarn-resourcemanager start
History Server    service hadoop-mapreduce-historyserver start
Node Manager      service hadoop-yarn-nodemanager start

Benchmarks
We used iozone as a baseline; iozone is a widely adopted benchmark tool in traditional HPC.
The DFSIO benchmark is a read and write test for Hadoop's file system. It is helpful for stress testing the file system, discovering performance bottlenecks in the network, shaking out the hardware, OS, and Hadoop setup of the cluster machines, and giving a first impression of how fast the cluster is in terms of I/O.
The TeraSort benchmark is probably the best-known Hadoop benchmark. Its goal is to sort 1TB of data (or any other amount) as fast as possible. It exercises both the file system and the MapReduce layers of a Hadoop cluster, so it is widely used in practice, with the added benefit that it allows results to be compared across clusters.
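For reference, invocations along these lines would produce the workloads described in the following slides. TestDFSIO, teragen, and terasort are the stock Hadoop benchmark entry points, but the jar paths shown are assumptions (they vary by CDH version and packaging), so the block only builds and prints the command strings rather than running them.

```shell
# Hypothetical jar locations (CDH-style packaging; adjust per installation)
JOBCLIENT_JAR=/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-tests.jar
EXAMPLES_JAR=/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar

# DFSIO as on the DFSIO slide: 72 map tasks, 10GB (10240MB) per file
dfsio="hadoop jar $JOBCLIENT_JAR TestDFSIO -write -nrFiles 72 -fileSize 10240"

# TeraSort on 500GB: teragen writes 100-byte rows, so 5,000,000,000 rows
rows=$((500 * 1000 * 1000 * 1000 / 100))
teragen="hadoop jar $EXAMPLES_JAR teragen $rows /benchmarks/tera-in"
terasort="hadoop jar $EXAMPLES_JAR terasort /benchmarks/tera-in /benchmarks/tera-out"

echo "$dfsio"
echo "$teragen"
echo "$terasort"
```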

Baseline Using iozone
The chart (a screenshot from IML) shows the aggregate throughput of Lustre during the iozone benchmark. We achieved a peak of 3.4GB/s writing and 4.59GB/s reading. Our goal is to achieve the same performance using Hadoop and HAL.

Pure Hadoop File System Benchmark: DFSIO
The chart (a screenshot from IML) shows the aggregate throughput of Lustre during the DFSIO benchmark. We used 72 map tasks (8 per compute node) with 10GB of data per map task. We achieved a peak of 3.28GB/s writing and 5.42GB/s reading.

Filling the I/O Throughput with Few Clients
Hadoop clusters are small compared with traditional HPC platforms. Intel Enterprise Edition for Lustre 2.0 includes the single-thread I/O performance improvement patch: single-thread workloads are improved, and so is general single-client scalability. In this experiment we achieved the full throughput of our cluster (3.41GB/s) using DFSIO with only 8 map tasks.

Benchmark Results from TeraSort
The chart (a screenshot from IML) shows the aggregate throughput of Lustre during the TeraSort benchmark. We used 72 map tasks, 144 reduce tasks, and a 500GB data size, with no Lustre or Hadoop optimizations. The benchmark took 23 minutes to complete. The workload is challenging for Lustre (50% read, 50% write). We achieved a peak of 3.9GB/s (read: 2.2GB/s, write: 1.7GB/s).

Local Metadata Caching Benefit in IEEL 2.0
The I/O patterns of Hadoop applications are very different from traditional HPC: often MANY metadata operations. Metadata servers can become overloaded when running Hadoop jobs at scale. The new extended-attribute caching improves metadata server performance. During the previous TeraSort benchmark we collected llite.*.stats from a client and the metadata activity received by the metadata server (mdt.*.md_stats). The huge benefit of local metadata caching is clear:

Operation  Single client requests  Expected with 9x clients  Actual on the metadata server
open       411954                  3707586                   4620535
close      411954                  3707586                   4620535
getattr    1851157                 16660413                  3239834
getxattr   52723133                474508197                 1154874
unlink     128231                  1154079                   1154726
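The counters above come from Lustre's stats files; a small parser like this (hypothetical, not from the deck) can pull a named counter out of a dump of `lctl get_param llite.*.stats`. The sample in the test assumes the usual stats layout of "name count samples [unit] ..." with the request count in the second field.

```shell
# stat_count NAME STATS_FILE: print the request count for one operation
# from a Lustre stats dump (lines formatted "<name> <count> samples ...").
stat_count() {
  awk -v n="$1" '$1 == n { print $2 }' "$2"
}

# On a real client:
#   lctl get_param -n llite.*.stats > /tmp/llite.stats
#   stat_count getxattr /tmp/llite.stats
```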

Conclusion
- We achieved our goal of a clean install of the Cloudera Distribution for Hadoop with HAL
- We demonstrated that Hadoop can use all the throughput available from Lustre
- The new version of Intel Enterprise Edition for Lustre is optimized for Hadoop
- SSDs and Lustre can sustain the atypical Hadoop workload

Acknowledgements: Servers, storage infrastructure, networking, and hardware support were supplied by CSCS.