Improving HPC applications scheduling with predictions based on automatically-collected historical data




Improving HPC applications scheduling with predictions based on automatically-collected historical data
Carlos Fenoy García, carles.fenoy@bsc.es
September 2014

Index
1 Introduction: Introduction, Motivation, Objectives
2 Proposed Solution: HPerf
3 Evaluation: Overhead analysis; Applications characterization and workload generation
4 Tests: Description; Workload High
5 Conclusions

Introduction: Supercomputers have increased their number of resources in recent years, and because those resources are shared, resource management has become critical to maximising their usage. The different components do not evolve at the same pace: memory bandwidth improves more slowly than processor speed. Existing resource selection mechanisms focus only on CPUs and memory, yet other limiting resources exist, such as interconnect bandwidth and memory bandwidth.

Introduction: Previous work exists to address the memory bandwidth management issue. The PhD thesis of Francesc Guim introduces the Less Consume selection policy, which treats memory bandwidth as an additional resource. Less Consume was later validated by porting the policy to a real system and analysing its behaviour. However, that port was done on MareNostrum 2, a different architecture from the one currently available in MareNostrum 3, and the policy requires users to specify the amount of memory bandwidth needed by each job.

Motivation: Users do not necessarily know the resources their applications use, while sharing resources has a real impact on application performance. Improvements in monitoring systems now enable the collection of multiple metrics per job, yet there is a lack of reliable mechanisms for specifying resource requirements.

Objectives: Port the previous work to a new architecture (MareNostrum 3). Enhance an existing resource selection policy with historical data to predict the resource usage of jobs: the policy is aware of shared resources (memory bandwidth), the historical data is collected by a transparent monitoring system, and the data is used in job scheduling. Analyse the behaviour and benefits of the policy and compare it with other existing policies.


Proposed Solution: Historical Performance Data (HPerf). The proposed system predicts the resource usage of an application from monitoring information gathered in previous equivalent executions, using a user-provided tag to identify the kind of job. It combines existing technologies to improve the scheduling of applications: a resource selection policy aware of memory bandwidth (Less Consume), a monitoring system able to collect per-job information (PerfMiner), and a scheduler and resource manager (Slurm).

Consumable Resources (ConsRes)

Less Consume (LessC)

Historical Performance Data (HPerf): If, at submission time, the job's tag is found in the historical database, the average of the recorded resource usage is used as the resource requirements for the job. If it is not found, the application runs in exclusive execution to favour the monitoring.
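A minimal sketch of that submission-time decision, assuming a simple per-tag history table; the tag, the bandwidth figures, and all names below are illustrative placeholders, not values or identifiers from the talk:

```python
from statistics import mean

# Per-tag history of measured memory-bandwidth usage (GB/s) from previous
# equivalent executions, as collected by the monitoring system.
# The tag name and the numbers are placeholders for illustration only.
history = {
    "cg_class_d_64tasks": [41.2, 39.8, 40.5],
}

def resource_request(tag):
    """Return (memory_bandwidth_request, run_exclusively) for a job tag."""
    samples = history.get(tag)
    if samples:
        # Tag found: the average of previous executions becomes the
        # resource requirement handed to the selection policy.
        return mean(samples), False
    # Tag not found: request exclusive execution so the monitoring system
    # can collect clean measurements for future predictions.
    return None, True

print(resource_request("cg_class_d_64tasks"))   # (40.5, False)
print(resource_request("unknown_application"))  # (None, True)
```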


Evaluation: system analysis. In order to be able to analyse the system, the following preliminary work was done: an analysis of the overhead introduced by PerfMiner; a characterization of the applications (studying the resources used by each application and the time elapsed per iteration); and workload generation, for which a workload generator was chosen because using real production applications was not feasible.

PerfMiner study: overhead analysis of PerfMiner with two sets of 100 jobs (CG class D with 64 tasks).
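The overhead figure reduces to comparing the mean runtime of the two job sets, with and without the monitor enabled; a minimal sketch of that comparison, with names that are illustrative rather than taken from the talk:

```python
from statistics import mean

def relative_overhead(runtimes_without, runtimes_with):
    """Relative runtime overhead of enabling the monitoring system.

    Each argument is a list of wall-clock runtimes (in seconds) for one of
    the two sets of 100 CG class D, 64-task jobs: without PerfMiner and
    with PerfMiner enabled.
    """
    return (mean(runtimes_with) - mean(runtimes_without)) / mean(runtimes_without)

# Usage: overhead = relative_overhead(runs_without_perfminer, runs_with_perfminer)
```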

Workload generation, applications characterization: The applications used to generate the workload were characterized by their memory bandwidth usage. High: CG class D and a synthetic application with high memory bandwidth usage. Medium: CG class C. Low: CG class B.

Workload generation: The Lublin-Feitelson model was used to generate the workload. It produces 100 jobs and provides, for each job, the number of CPUs (2-64), the job duration, and the job arrival time. Combining the generated workload with the list of applications, two final workloads were obtained, with the following number of jobs per class: Medium workload: 54 high, 27 medium, 19 low; High workload: 84 high, 7 medium, 9 low. The time limit used for the jobs of the workload was calculated from the application running standalone.
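One way the generated jobs might be combined with the characterized applications to reproduce these mixes is sketched below; the assignment strategy and all names are assumptions made for illustration, since the talk only gives the final counts.

```python
import random

# Job mix per workload (number of jobs per memory-bandwidth class,
# out of the 100 generated jobs), taken from the counts above.
MIX = {
    "MEDIUM": {"high": 54, "medium": 27, "low": 19},
    "HIGH":   {"high": 84, "medium": 7,  "low": 9},
}

def assign_app_classes(jobs, workload, seed=0):
    """Attach a memory-bandwidth class to each Lublin-Feitelson job record.

    `jobs` is the list of 100 generated job records (cpus, duration,
    arrival time), each represented here as a dict. Shuffling the class
    labels randomly is an assumption; the talk does not detail how jobs
    were matched to applications.
    """
    labels = [cls for cls, n in MIX[workload].items() for _ in range(n)]
    assert len(labels) == len(jobs)
    random.Random(seed).shuffle(labels)
    return [dict(job, app_class=cls) for job, cls in zip(jobs, labels)]
```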


Tests description: Both workloads (HIGH and MEDIUM) were run with three different scheduling policies. ConsRes: the default Slurm scheduling policy, which considers only CPUs and memory. LessC: the Less Consume policy, aware of the memory bandwidth used by each application. HPerf: Less Consume plus historical performance data, which automatically obtains each job's resource consumption, tested both with an empty database and with a preloaded database. For each policy, tests were run with unlimited grace time and with limited grace time, which kills jobs 5 minutes after their time limit.
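For reference, the full set of test configurations implied by this description can be enumerated as follows; the labels are paraphrased, not the exact configuration names used in the experiments.

```python
from itertools import product

# 2 workloads x 4 policy variants x 2 grace-time settings = 16 runs.
workloads = ["HIGH", "MEDIUM"]
policies = ["ConsRes", "LessC", "HPerf (empty DB)", "HPerf (preloaded DB)"]
grace_times = ["unlimited grace time", "limited grace time (kill 5 min past the time limit)"]

for workload, policy, grace in product(workloads, policies, grace_times):
    print(f"{workload} workload | {policy} | {grace}")
```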

HIGH workload: unlimited time [chart legend: CR, HPerf, DB]

HIGH workload: limited grace time [chart legend: CR, HPerf, DB]

HIGH workload: finishing status [panels: Unlimited, Limited]

HIGH workload: slowdown [panels: Unlimited, Limited]

HIGH workload: waiting time [panels: Unlimited, Limited]

HIGH workload: CPU usage [panels: Unlimited, Limited]

HIGH workload: useful CPU usage

HIGH workload: job run time predictability [panels: Unlimited, Limited]


Conclusions: Using monitoring data for job scheduling results in better allocations that improve application performance, increase the useful CPU usage, reduce the overload of shared resources, increase the waiting time, and avoid asking users for information they may not know. Better mechanisms to measure memory bandwidth are needed to further improve the performance of the solution.

Improving HPC applications scheduling with predictions based on automatically-collected historical data
Carlos Fenoy García, carles.fenoy@bsc.es
September 2014