
Running Hadoop and Stratosphere jobs on TomPouce cluster

16 October 2013

TomPouce cluster

TomPouce is a cluster of 20 calculation nodes = 240 cores
Located in the Inria Turing building (École Polytechnique)
Used jointly by Inria teams
Jobs are run with the help of a scheduler: SGE (Sun Grid Engine)

TomPouce cluster: SPECIFICATIONS

Calculation: 20 nodes, each with two 6-core processors. Total: 240 cores
- 48 GB RAM per node
- Local space: 400 GB
Storage:
- Dell R510, /home, 19 TB, NFS
- Dell R710 x2, /scratch, 37 TB, FhGFS (FraunhoferFS)
Network:
- Switch Dell 5548
- Infiniband switch Mellanox InfiniScale IV QDR


1. Copy your job from the local machine to the cluster front node

$ scp myjob.jar inria_username@195.83.212.209:~/

myjob.jar will be copied into the folder /home/leo/inria_username.
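The scripts in step 5 stage their input data from a directory under a home folder (/home/guests/clustervision/tmp), so any local input files can be copied over the same way as the jar. A hypothetical sketch, assuming your data lives in a local directory called mydata/ and should land in a tmp/ folder in your cluster home (both names are placeholders, not from the slides):

$ scp -r mydata/ inria_username@195.83.212.209:~/tmp/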

2. Connect via ssh to the front node

$ ssh inria_username@195.83.212.209
Welcome to Bright Cluster Manager 6.0
Based on Scientific Linux release 6
Cluster Manager ID: #120054

Use the following commands to adjust your environment:
'module avail'            - show available modules
'module add <module>'     - adds a module to your environment for this session
'module initadd <module>' - configure module to be loaded at every login

IMPORTANT: To connect to the cluster, your ssh key should be stored in the Inria LDAP. If not, send an e-mail with your public ssh key to: helpmi-saclay@inria.fr

3. Log in as the clustervision superuser using your LDAP password

$ sudo su - clustervision

- This is needed to execute Hadoop and Stratosphere jobs and to edit the configurations.
- If you don't have enough permissions, ask for them at: helpmi-saclay@inria.fr

4. Add the Hadoop/Stratosphere environment to your session

To add the Hadoop environment, type:
$ module add hadoop/1.1.1

To add the Stratosphere environment, type:
$ module add stratosphere/stratosphere

- To add an environment automatically when you log in:
$ module initadd hadoop/1.1.1

- To check all the environments loaded:
$ module list
Currently Loaded Modulefiles:
  1) gcc/4.7.0
  2) intel-cluster-checker/1.8
  3) stratosphere/stratosphere-0.2.1
  4) sge/2011.11
  5) openmpi/gcc/64/1.4.5
  6) gromacs/openmpi/gcc/64/4.0.7
  7) hadoop/1.1.1
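If you later want to undo an initadd, the environment modules tool also provides an initrm subcommand; this is standard to module, though not mentioned on the slides:

$ module initrm hadoop/1.1.1    # stop loading the hadoop environment at every login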

4. Add the Hadoop/Stratosphere environment to your session

Hadoop installation: /cm/shared/apps/hadoop/current/
Stratosphere installation: /cm/shared/apps/stratosphere/current/

5. Create an execution script (Hadoop)

#!/bin/bash
#$ -N hadoop_run
#$ -pe hadoop 12
#$ -j y
#$ -o output.$JOB_ID
#$ -l h_rt=00:10:00,hadoop=true,excl=true
#$ -cwd
#$ -q hadoop.q

# Copy the input files into the HDFS filesystem
hadoop --config /home/guests/clustervision/current/ dfs -copyFromLocal /home/guests/clustervision/tmp /input

# Run the Hadoop task(s) here, specifying the jar, class and run parameters
hadoop --config /home/guests/clustervision/current/ jar myjob.jar org.myorg.Job /input /output

# Copy the output files from the HDFS filesystem
hadoop --config /home/guests/clustervision/current/ fs -get /output

SGE execution parameters:

These should be written after #$ at the beginning of the script.

-N <job_name>: used to give a name to the job to run.
-pe <environment> N: specifies the parallel environment; N is the number of cores (limited to 180).
-j y: use the same output file for errors and standard output.

SGE execution parameters:

-o output.$JOB_ID: the standard output will be in a file named output.$JOB_ID; $JOB_ID is the number SGE assigns automatically to our job.
-l name=value: used to request a resource. In this case:
  h_rt=00:10:00 indicates that the job should be killed after 10 minutes
  hadoop=true indicates that the job to run is a Hadoop job (it DOES NOT CHANGE for Stratosphere jobs)
  excl=true indicates that it is executed exclusively
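Putting these directives together with the two that appear in the scripts but are not discussed above (-cwd runs the job from the submission directory, -q selects the queue), a minimal skeleton for a TomPouce job script could look as follows; the job name my_job and the command at the end are placeholders to adapt:

#!/bin/bash
#$ -N my_job                                # job name shown by qstat
#$ -pe hadoop 12                            # parallel environment and core count (max 180)
#$ -j y                                     # merge errors into the standard output file
#$ -o output.$JOB_ID                        # output file, suffixed with the SGE job id
#$ -l h_rt=00:10:00,hadoop=true,excl=true   # wall time, Hadoop resource, exclusive execution
#$ -cwd                                     # run in the directory the job was submitted from
#$ -q hadoop.q                              # target queue

# ... the actual hadoop / pact-client.sh commands go here ...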

5. Create an execution script (Hadoop)

HADOOP COMMANDS

Copy input files into HDFS:
hadoop --config /home/guests/clustervision/current/ dfs -copyFromLocal /home/guests/clustervision/tmp /input

Run Hadoop tasks:
hadoop --config /home/guests/clustervision/current/ jar /pathtojob/myjob.jar org.myorg.Job /input /output

Copy output files from HDFS:
hadoop --config /home/guests/clustervision/current/ fs -get /output

5. Create an execution script (Stratosphere)

#!/bin/bash
#$ -N strato_run
#$ -pe stratosphere 24
#$ -j y
#$ -o output.$JOB_ID
#$ -l h_rt=00:10:00,hadoop=true,excl=true
#$ -cwd
#$ -q hadoop.q

export PATH=$PATH:'/cm/shared/apps/hadoop/current/conf/'
export STRATOSPHERE_HOME='/cm/shared/apps/stratosphere/current'
MASTER=`cat /home/guests/clustervision/current/masters`

# Copy the input files into the HDFS filesystem
hadoop --config /home/guests/clustervision/current/ dfs -copyFromLocal /home/guests/clustervision/tmp /input

# Run the Stratosphere job(s) here, giving the jar and its arguments
$STRATOSPHERE_HOME/bin/pact-client.sh run -j myjob.jar -a 2 hdfs://$MASTER:50040/input hdfs://$MASTER:50040/output

# Copy the output files from the HDFS filesystem
hadoop --config /home/guests/clustervision/current/ fs -get /output

5. Create an execution script (Stratosphere)

STRATOSPHERE COMMANDS

Copy input files into HDFS:
hadoop --config /home/guests/clustervision/current/ dfs -copyFromLocal /home/guests/clustervision/tmp /input

Run Stratosphere tasks:
$STRATOSPHERE_HOME/bin/pact-client.sh run -j /pathtojob/myjob.jar -a 2 hdfs://$MASTER:50040/input hdfs://$MASTER:50040/output

Copy output files from HDFS:
hadoop --config /home/guests/clustervision/current/ fs -get /output

6. Submission of a job

To submit, execute:
$ qsub script.qsub

After submission, you can see the state of execution with the command:
$ qstat
job-ID  prior    name        user          state  submit/start at      queue                        slots  ja-task-ID
----------------------------------------------------------------------------------------------------------------------
159048  0.60500  strato_run  clustervisio  r      10/15/2013 23:17:59  hadoop.q@node011.cm.cluster  24

6. Submission of a job

Or, if you want more detailed information:
$ qstat -t
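Two more standard SGE commands, not shown on the slides, are useful once you have a job id (159048 below is taken from the qstat example above):

$ qstat -j 159048   # full details for one job: resources requested, scheduling messages
$ qdel 159048       # cancel the job, whether queued or running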

7. Logs

/home/guests/clustervision/output.$JOB_ID: output of the job execution in SGE.
/home/guests/clustervision/config.$JOB_ID/logs: logs of the Hadoop file system.
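To inspect these from the front node, ordinary shell commands suffice; a small sketch, again using the hypothetical job id 159048:

$ tail -f /home/guests/clustervision/output.159048    # follow the SGE job output live
$ ls /home/guests/clustervision/config.159048/logs    # list the Hadoop logs for that job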