Introduction to Running Hadoop on the High Performance Clusters at the Center for Computational Research
1 Introduction to Running Hadoop on the High Performance Clusters at the Center for Computational Research
Cynthia Cornelius
Center for Computational Research, University at Buffalo, SUNY
701 Ellicott St, Buffalo, NY
Phone: | Office: B1-109
cdc at ccr.buffalo.edu
January 2013
2 CCR
The Center for Computational Research is the supercomputing facility for the University at Buffalo. CCR's mission is threefold: research, education, and outreach.
3 Cluster Resources
32 computers: 12 cores with 48GB memory and 2 GPUs
256 computers: 8 cores with 24GB memory
372 computers: 12 cores with 48GB memory
16 computers: 32 cores with 256GB memory
2 computers: 32 cores with 512GB memory
COMING SOON: core machines
Remote Visualization System
4 Cluster Resources
505TB of network-attached storage
Backup of home and projects directories
High performance InfiniBand fabric network
more on cluster resources
Command line interface
Batch-scheduled access to cluster machines
5 Data Storage
Permanent Data Storage
  Home directory: 2GB, /user/username, backed up
  Projects directory: 200GB-500GB, /projects/groupname, group space requested by the Academic Research Advisor, backed up
Temporary Data Storage
  Panasas directory: /panasas/scratch, 215TB, files older than 2 weeks are automatically deleted, available on all nodes
  Scratch directory: /scratch, 256GB-3TB, local to the compute node, files deleted at end of job
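A couple of standard commands can be used to check how much of these spaces you are using; this is a small sketch using the directory names above (username and groupname are placeholders, as in the slides):

du -sh /user/username        # total size of your home directory (2GB quota)
du -sh /projects/groupname   # total size of your group's projects space
df -h /panasas/scratch       # free space remaining in the shared scratch area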
6 Accessing the Cluster: Login Machine
The CCR cluster front-end machine is u2.ccr.buffalo.edu, a 32-core node with 256GB of memory. Use it to:
  Login to the cluster
  Transfer data files to and from cluster data storage
  Work in a command line Linux environment
  Compile code, edit files and submit jobs to the cluster
  Run small computations
Compute nodes and storage spaces are not directly accessible from outside the cluster.
The login machine is accessible from the UB network only (LAN or UB Secure Wireless on campus). Use the UB VPN from off campus to connect to the UB network.
7 Accessing the Cluster From Windows Machines
Login:
  XWin-32 program: allows graphical display back to the user's workstation/laptop
  PuTTY secure shell program
File transfer:
  FileZilla or WinSCP are graphical file transfer programs
  scp/sftp are available in the PuTTY package
UB VPN for off-campus access
More on: XWin-32, FileZilla, WinSCP, UB VPN, User Guide
8 Accessing the Cluster From Linux and MAC Machines
Login:
  ssh
  ssh -X allows graphical display back to the user's workstation/laptop
File transfer:
  FileZilla is a graphical file transfer program
  sftp or scp
UB VPN for off-campus access
More on: ssh/sftp/scp, UB VPN for Linux, UB VPN for MAC, User Guide
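For example, logging in and transferring a file from a Linux or Mac machine might look like the following sketch (username and the file name results.tar.gz are placeholders):

ssh [email protected]                                   # log in to the front-end machine
ssh -X [email protected]                                # log in with graphical display forwarding
scp results.tar.gz [email protected]:/user/username/    # copy a local file to your home directory
sftp [email protected]                                  # interactive file transfer session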
9 Command Line
List the contents of a directory: ls -l (displays ownership and permissions)
Show details and hidden files: ls -la
Show details in reverse time order: ls -latr
Show current directory pathname: pwd
Create a directory: mkdir dirname
Change directory: cd /pathname
Change to one directory above: cd ..
Change to two directories above: cd ../..
Change to home directory: cd
Remove a file: rm filename
Remove a directory: rm -R dirname
10 Command Line
All directories and files have permissions. Users can set the permissions to allow or deny access to their files and directories. Permissions are set for the user (u), the group (g) and everyone else (o). The permissions are read, write and execute.
drwxrwxrwx
-rwxrwxrwx
Read (r) permission allows viewing and copying. Write (w) permission allows changing and deleting. Execute (x) permission allows execution of a file and access to a directory.
Show permissions: ls -l
The leftmost character is - for a file and d for a directory. The next three are read, write and execute for the user (owner).
11 Command Line
The middle three are read, write and execute for the group. These permissions set the access for all members of the group. The rightmost three are read, write and execute for other.
Change permissions with the chmod command.
  chmod ugo+rwx filename
  chmod ugo-rwx filename
  ugo is user, group and other
  rwx is read, write and execute
  + adds a permission
  - removes a permission
Adding write permission for the group: chmod g+w filename
Removing all permissions for group and other on a directory and its contents: chmod -R go-rwx dirname
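As a quick illustration of reading and changing permissions, here is what adding group write access might look like for a hypothetical file report.txt (the file name and listing below are illustrative, not from the original slides):

ls -l report.txt        # shows -rw-r--r--  : owner can read/write; group and other can only read
chmod g+w report.txt    # add write permission for the group
ls -l report.txt        # now shows -rw-rw-r--  : group members can also write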
12 Command Line
The asterisk is a wildcard for many UNIX commands. List all C files: ls *.c
Viewing files: cat, more or less
  cat filename displays a file to the screen
  more filename displays a file to the screen with page breaks
  less filename allows paging forward and back
There are several editors available: emacs, gedit, nano and vi
13 Command Line
Files edited on Windows machines can contain hidden characters, which may cause runtime problems. Use the file command to show the type of file. If the file is a DOS file, then use the dos2unix command to remove any hidden characters.
  file filename
  dos2unix filename
Manual pages are available for UNIX commands: man ls
Change your password using the MyCCR web interface. More on MyCCR
See the CCR user guide for more information. more on the User Guide
CCR Command Line Reference Card
14 Modules
Most applications installed on the cluster have a module file that sets paths and variables. Loading a module adds the path for the application to path variables and sets variables. Unloading a module removes the path and unsets the variables.
List available modules: module avail, module avail intel
Show what a module will do: module show modulename
Load a module: module load modulename
Unload a module: module unload modulename
15 Modules
List currently loaded modules: module list
Users can create custom modules to point to their own software installations.
Creating your own module:
  Create the privatemodules directory in your home directory.
  Copy a module to the privatemodules directory.
  Edit the module.
Using your own module (see the sketch below):
  Load the use.own module: module load use.own
  Load your module: module load yourmodule-name
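A minimal sketch of that workflow, assuming a self-installed package under $HOME/software/mytool; the module name mytool and the template path are placeholders, not actual CCR paths:

mkdir -p $HOME/privatemodules
cp /path/to/existing/modulefile $HOME/privatemodules/mytool   # placeholder path: copy any installed module file as a template
nano $HOME/privatemodules/mytool                              # edit the paths so they point at $HOME/software/mytool
module load use.own                                           # makes $HOME/privatemodules visible to the module command
module load mytool
module list                                                   # confirm the new module is loaded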
16 Running on the Cluster
Computations are submitted to the PBS (Portable Batch System) scheduler as jobs to run on the cluster. Jobs can be script-based or interactive. The compute nodes are assigned to jobs by the scheduler. Users are not permitted to use compute nodes outside of a scheduled job.
The qsub command is used to submit a job to the cluster.
Specify the number and type of nodes:
  -lnodes=2:ib2:ppn=8 requests 2 compute nodes and 8 processors per node, a total of 16 processors. The IB2 tag specifies the 8-core nodes.
17 Running on the Cluster
  -lnodes=4:ib1:ppn=12 requests 4 compute nodes and 12 processors per node, a total of 48 cores. The IB1 tag specifies the 12-core nodes.
The PBS tags IB1 and IB2 stand for InfiniBand fabrics (networks) 1 and 2, corresponding to the 12-core and 8-core nodes.
Specify the length of time. This is the walltime of the job. The maximum allowed walltime is 72 hours.
Most jobs will run in the default queue. There is no need to specify this queue.
Using the -I flag on the command line requests an interactive job.
A minimal example batch script is sketched below.
more on cluster nodes and PBS tags
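To tie these options together, here is a hedged sketch of a generic PBS batch script using the directives described above; the job name, output file name, and executable are placeholders, not from the original slides:

#!/bin/bash
#PBS -l nodes=2:ib2:ppn=8     # 2 of the 8-core nodes, 16 cores total
#PBS -l walltime=01:00:00     # 1 hour of walltime (maximum is 72 hours)
#PBS -N myjob                 # placeholder job name
#PBS -o myjob.out             # placeholder output file
#PBS -j oe                    # combine stdout and stderr
cd $PBS_O_WORKDIR             # run from the directory the job was submitted from
./myprogram                   # placeholder executable

Submit the script with qsub, or request the same resources interactively with qsub -I -lnodes=2:ib2:ppn=8 -lwalltime=01:00:00.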
18 Running on the Cluster
The cluster has a debug queue for quick testing of codes. The maximum walltime is 1 hour. There are 2 8-core nodes, 2 12-core nodes, and 1 GPU node. Use the -q debug flag to request the debug queue.
Submit the job: qsub pbs-script
Show job status with the qstat command:
  qstat -an -u username
  qstat jobid
A job will have a queued or run state. Jobs in a run state have a node list. The cores are indicated in the node list.
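For quick testing, a debug-queue submission and status check could look like this sketch (pbs-script is a placeholder script name, as on the slide above):

qsub -q debug pbs-script    # submit to the debug queue (1 hour walltime limit)
qstat -an -u username       # check the state; a running job shows its node list and cores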
19 Running on the Cluster
The first node in the list is the head node of the job.
The qdel command will delete a job: qdel jobid
The showstart command will give an estimated start time for a job: showstart jobid
  The start time is estimated based on the current queue. This value can change.
The showq command will display the queue status. The running jobs are shown first, followed by the queued or idle jobs.
  showq
  Piping to more is helpful with showq: showq | more
The showbf command will show nodes that are currently available.
  showbf -S
  showbf -S | more
20 Hadoop on the Cluster
There are two methods of running a Hadoop computation on the cluster. The first is to use the MyHadoop scripts to set up and run the Hadoop computation. The second is an interactive job. We will look at this option later.
Using the MyHadoop scripts:
  Change to the working directory.
  Copy the pbsmyhadoop, hadoop-examples.jar and README.txt files from the /util/myhadoop/sample directory:
    cp /util/myhadoop/sample/pbsmyhadoop .
    cp /util/myhadoop/sample/hadoop-examples.jar .
21 Hadoop on the Cluster
    cp /util/myhadoop/sample/README.txt .
  Edit the pbsmyhadoop file: emacs pbsmyhadoop
  Submit the job: qsub pbsmyhadoop
  Show the job in the queue:
    qstat -an -u username
    qstat -an jobid
The config directory is created in the working directory. The output file will be in this directory when the job completes.
Monitor the job with the ccrjobviz.pl command: ccrjobviz.pl jobid
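Once the job finishes, the results can be inspected from the working directory. A small sketch, assuming the sample pbsmyhadoop script was used unchanged (it names its PBS output file hadoop_run.out):

ls config/            # Hadoop configuration files generated for this job
cat hadoop_run.out    # job output, including the wordcount results printed by the fs -cat step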
22 Submitting a Job
23 Running a Job
24 Running a Job
25 Running a Job
26 pbsmyhadoop Script page 1
#!/bin/bash                    specify bash shell
#PBS -l nodes=4:ib1:ppn=12     node request
#PBS -l walltime=00:30:00      time request
#PBS -M [email protected]        email address
#PBS -m e                      email on job exit
#PBS -N test                   name of job
#PBS -o hadoop_run.out         output file
#PBS -j oe                     stdout and stderr go to the same output file
cd $PBS_O_WORKDIR              change to the directory from which the job was submitted
27 pbsmyhadoop Script page 2
echo "working directory = "$PBS_O_WORKDIR
. $MODULESHOME/init/sh
module load myhadoop/0.2a/hadoop
echo "MY_HADOOP_HOME="$MY_HADOOP_HOME
echo "HADOOP_HOME="$HADOOP_HOME
#### Set this to the directory where Hadoop configs should be generated
# Don't change the name of this variable (HADOOP_CONF_DIR) as it is
# required by Hadoop - all config files will be picked up from here
#
# Make sure that this is accessible to all nodes
export HADOOP_CONF_DIR=$PBS_O_WORKDIR/config
echo "MyHadoop config directory="$HADOOP_CONF_DIR
28 pbsmyhadoop Script page 2
Display the working directory.
Initialize the modules environment.
Load the module file for myhadoop.
Display the value of the $MY_HADOOP_HOME variable.
Display the value of the $HADOOP_HOME variable.
Set $HADOOP_CONF_DIR to the config directory in the working directory.
Display the value of the $HADOOP_CONF_DIR variable.
29 pbsmyhadoop Script page 3
### Set up the configuration
# Make sure number of nodes is the same as what you have requested from PBS
# usage: $MY_HADOOP_HOME/bin/pbs-configure.sh -h
echo "Set up the configurations for myhadoop"
# this is the non-persistent mode
NNuniq=`cat $PBS_NODEFILE | uniq | wc -l`
echo "Number of nodes in nodelist="$NNuniq
$MY_HADOOP_HOME/bin/pbs-configure.sh -n $NNuniq -c $HADOOP_CONF_DIR
sleep 15
30 pbsmyhadoop Script page 3
Create a unique node list using the $PBS_NODEFILE variable. The $PBS_NODEFILE variable contains a list of the compute nodes assigned to the job, with each node listed as many times as there are cores on the node.
`cat $PBS_NODEFILE | uniq | wc -l` means display the nodefile, filter it through the uniq command, then count the number of lines that are output.
Run the configure script.
Wait for 15 seconds.
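As a hypothetical illustration of that pipeline (the node names below are made up, not actual CCR hostnames): for a 2-node, 12-cores-per-node request, $PBS_NODEFILE lists each node 12 times, and the pipeline reduces it to the node count.

# hypothetical contents of $PBS_NODEFILE for -lnodes=2:ib1:ppn=12:
#   nodeA   (listed 12 times, consecutively)
#   nodeB   (listed 12 times, consecutively)
cat $PBS_NODEFILE | uniq | wc -l    # prints 2, since uniq collapses the consecutive repeats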
31 pbsmyhadoop Script page 4
# this is the persistent mode
# $MY_HADOOP_HOME/bin/pbs-configure.sh -n 4 -c $HADOOP_CONF_DIR -p -d /oasis/cloudstor-group/hdfs
#### Format HDFS, if this is the first time or not a persistent instance
echo "Format HDFS"
$HADOOP_HOME/bin/hadoop --config $HADOOP_CONF_DIR namenode -format
sleep 15
echo "start dfs"
$HADOOP_HOME/bin/start-dfs.sh
sleep 15
32 pbsmyhadoop Script page 4
Format the HDFS on the compute nodes. The HDFS (Hadoop Distributed File System) is created using the local disk of each compute node running the job.
Wait 15 seconds.
Start the dfs.
Wait 15 seconds.
33 pbsmyhadoop Script page 5
echo "copy file to dfs"
$HADOOP_HOME/bin/hadoop --config $HADOOP_CONF_DIR dfs -put README.txt /
sleep 15
echo "ls files in dfs"
$HADOOP_HOME/bin/hadoop --config $HADOOP_CONF_DIR dfs -ls /
echo "start jobtracker (mapred)"
$HADOOP_HOME/bin/start-mapred.sh
echo "ls files in dfs"
$HADOOP_HOME/bin/hadoop --config $HADOOP_CONF_DIR fs -ls /
34 pbsmyhadoop Script page 5
Copy the file to dfs.
Wait 15 seconds.
List the files in dfs.
Start the job tracker.
List the files in dfs.
35 pbsmyhadoop Script page 6
echo "run computation"
$HADOOP_HOME/bin/hadoop --config $HADOOP_CONF_DIR jar hadoop-examples.jar wordcount /README.txt /output
echo "ls files in dfs"
$HADOOP_HOME/bin/hadoop --config $HADOOP_CONF_DIR fs -ls /output
echo "view output results"
$HADOOP_HOME/bin/hadoop --config $HADOOP_CONF_DIR fs -cat /output/part-r
echo "stop jobtracker (mapred)"
$HADOOP_HOME/bin/stop-mapred.sh
echo "stop dfs"
$HADOOP_HOME/bin/stop-dfs.sh
36 pbsmyhadoop Script page 6
Run the computation.
List the files in dfs.
View the results.
Stop the job tracker.
Stop the dfs.
37 pbsmyhadoop Script page 7
#### Clean up the working directories after job completion
echo "Clean up"
$MY_HADOOP_HOME/bin/pbs-cleanup.sh -n $NNuniq
echo
38 pbsmyhadoop Script page 7 Run the cleanup script.
39 Monitoring a Job
ccrjobviz.pl is a graphical display used to monitor and evaluate a running job: ccrjobviz.pl jobid
The top command can be used to monitor the processes on a node.
  Use qstat -an -u username or qstat -an jobid to find the job's nodes.
  ssh into the node: ssh nodename
  Run top on the node: top
The first node in the list is the head node of the job, which holds the standard output and error file while the job is running.
  ssh into the node: ssh nodename
  cd /var/spool/pbs/spool
  The output file will have the jobid as the file name: cat jobid
40 Monitoring a Job
41 Monitoring a Job
42 Hadoop on the Cluster
The second method of running a Hadoop computation is to use an interactive job.
Submit an interactive job: qsub -I -lnodes=4:ib2:ppn=8 -lwalltime=03:00:00
Once the job starts, configure Hadoop by hand and run the computation. This takes more time, but you will learn the specific configuration steps necessary for the Hadoop computation. A sketch of those steps is shown below.
Here are the instructions: Running Hadoop - CCR
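A minimal sketch of what those by-hand steps might look like inside the interactive job. This simply mirrors the pbsmyhadoop script shown earlier rather than the official CCR instructions, and it assumes the sample files (README.txt, hadoop-examples.jar) have been copied into the working directory as on slides 20-21; the working directory path is a placeholder:

cd /path/to/workdir                      # placeholder working directory containing the sample files
. $MODULESHOME/init/sh
module load myhadoop/0.2a/hadoop
export HADOOP_CONF_DIR=$PWD/config
$MY_HADOOP_HOME/bin/pbs-configure.sh -n `cat $PBS_NODEFILE | uniq | wc -l` -c $HADOOP_CONF_DIR
$HADOOP_HOME/bin/hadoop --config $HADOOP_CONF_DIR namenode -format
$HADOOP_HOME/bin/start-dfs.sh
$HADOOP_HOME/bin/hadoop --config $HADOOP_CONF_DIR dfs -put README.txt /
$HADOOP_HOME/bin/start-mapred.sh
$HADOOP_HOME/bin/hadoop --config $HADOOP_CONF_DIR jar hadoop-examples.jar wordcount /README.txt /output
$HADOOP_HOME/bin/hadoop --config $HADOOP_CONF_DIR fs -ls /output
$HADOOP_HOME/bin/stop-mapred.sh
$HADOOP_HOME/bin/stop-dfs.sh
$MY_HADOOP_HOME/bin/pbs-cleanup.sh -n `cat $PBS_NODEFILE | uniq | wc -l`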
43 Submitting an Interactive Job
44 User Support
Online Resources: New User Information, User Guide, Tutorials and Presentations
Help: CCR staff members are available to assist you. UB faculty and students can request individual and group training sessions.
more on CCR Help