Performance Metrics and Scalability Analysis




Lecture Outline
The following topics will be discussed:
- Requirements in performance and cost
- Performance metrics: workload and speed metrics
- Measurement and analysis of communication overhead
- Theoretical concepts of performance and scalability: the phase parallel model; speedup, efficiency, and isoefficiency models
- Examples

Performance Versus Cost
Questions to be answered:
- What are the user's requirements in performance and cost?
- How should one measure the performance of application programs?
- What kind of performance metrics should be used?
- What are the factors (parameters) affecting performance?
- How can one determine the scalability of a parallel computer executing a given application?

How do we measure the performance of a computer system? Many people believe that execution time is the only reliable metric of computer performance.
Approach: run the user's application and measure the elapsed wall-clock time.
Remarks: this approach is sometimes difficult to apply and can permit misleading interpretations. There are pitfalls in using execution time as the sole performance metric: execution time alone does not give the user much insight into the true performance of the parallel machine.
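As a minimal sketch of the "measure the elapsed wall-clock time" approach, the following C program times a run with gettimeofday; the work() function is a hypothetical stand-in for the user's application, and the problem size is an arbitrary assumption.

#include <stdio.h>
#include <sys/time.h>

/* Hypothetical stand-in for the user's application kernel. */
static double work(long n)
{
    double s = 0.0;
    long i;
    for (i = 1; i <= n; i++)
        s += 1.0 / (double)i;
    return s;
}

int main(void)
{
    struct timeval t0, t1;
    double r, elapsed;

    gettimeofday(&t0, NULL);           /* start of the wall-clock interval */
    r = work(100000000L);              /* the "application" being timed */
    gettimeofday(&t1, NULL);           /* end of the wall-clock interval */

    elapsed = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("result = %f, elapsed wall-clock time = %.3f s\n", r, elapsed);
    return 0;
}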

Performance Requirements
Types of performance requirement: six types of performance requirements are posed by users:
- Execution time
- Processing speed
- System throughput
- Utilization
- Cost effectiveness
- Performance/cost ratio
Remark: these requirements can lead to quite different conclusions for the same application on the same computer platform.

Performance versus cost: execution time is critical to some applications. In a real-time application, the user cares about whether the job is guaranteed to finish within a time limit. Example: radar signal processing.
Processing speed: for many applications, the user may be interested in achieving a certain processing speed rather than meeting an execution-time limit. Example: radar signal processing.

Performance Requirements
System throughput: throughput is defined as the number of jobs processed per unit time. Examples: the STAP benchmarks, the TPC benchmarks.
Remark: execution time is not the only performance requirement; other performance requirements are also needed.

Utilization and cost effectiveness: instead of searching for the shortest execution time, the user may want to run the application more cost-effectively, with a high percentage of the CPU hours used for useful computing and a reduction of the time spent in load imbalance and communication overheads.

Performance Requirements
Utilization and cost effectiveness: a good indicator of cost-effectiveness is the utilization factor, the ratio of the achieved speed to the peak speed of a given computer.
Performance/cost ratio: the performance/cost ratio of a computer system is defined as the ratio of its speed to its purchasing price.

Remarks:
- Higher utilization corresponds to higher Gflop/s per dollar, provided that CPU-hours are charged at a fixed rate.
- A low utilization does not always indicate a poor program or compiler: a good program could have a long execution time due to a large workload, or a low speed due to a slow machine.
- The utilization factor typically varies from 5% to 38%, and it generally drops as more nodes are used.
- Utilization values generated from vendors' benchmark programs are often highly optimized.
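A minimal sketch of these two ratios in C; all of the numbers below are illustrative assumptions, not figures from the lecture.

#include <stdio.h>

int main(void)
{
    /* Illustrative (assumed) figures for one run on one system. */
    double achieved_gflops = 1.2;    /* sustained speed measured for the application */
    double peak_gflops     = 8.0;    /* vendor peak speed of the machine */
    double price_dollars   = 50000;  /* purchasing price of the system */

    double utilization = achieved_gflops / peak_gflops;    /* achieved speed / peak speed */
    double perf_cost   = achieved_gflops / price_dollars;  /* Gflop/s per dollar */

    printf("utilization      = %.1f %%\n", utilization * 100.0);
    printf("performance/cost = %.2e Gflop/s per dollar\n", perf_cost);
    return 0;
}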

Performance Versus Cost
The misleading peak performance/cost ratio: using the peak performance/cost ratio to compare systems is often misleading. For example, the peak performance/cost ratio of some supercomputers is much lower than that of others, yet their sustained performance is actually higher. Users often need more than one metric when comparing different parallel computing systems. The cost-effectiveness measure should not be confused with the performance/cost ratio of a computer system: if we used cost-effectiveness or the performance/cost ratio alone, a current Pentium PC would easily beat far more powerful systems.

Summary: execution time is just one of several performance requirements a user may want to impose. The others include speed, throughput, utilization, cost-effectiveness, performance/cost ratio, and scalability. Usually a set of requirements is imposed.

Workload and Speed Metrics
Three metrics are frequently used to measure the computational workload of a program:
- Execution time: the first metric is tied to a specific computer system; it may change when the program is executed on a different machine.
- The number of instructions executed: the second metric, tied to an instruction set architecture (ISA), stays unchanged when the program is executed on different machines with the same ISA.
- The number of floating-point operations executed: the third metric is often architecture-independent.

Summary of all three performance metrics:

Workload Type                          | Workload Unit                     | Speed Unit
Execution time                         | Seconds (s), CPU clocks           | Applications per second
Instruction count                      | Million or billion instructions   | MIPS or BIPS
Floating-point operation (flop) count  | flop, Mflop, Gflop                | Mflop/s, Gflop/s

Workload and Speed Metrics
Instruction count: the workload is the number of instructions the machine actually executes, not just the number of instructions in the assembly program text. The instruction count may depend on the input data values; for such an input-dependent program, the workload is defined as the instruction count for the worst-case input. Example: a sorting program may execute 100000 instructions for one input but only 4000 for another.
Even with a fixed input, the instructions executed can differ on different machines, and the instruction count of a program can differ on the same machine when different compilers or optimization options are used. Finally, a larger instruction count does not necessarily mean the program needs more time to execute.

Workload and Speed Metrics
Execution time: for a given program on a specific computer system, one can define the workload as the total time taken to execute the program. This execution time should be measured as wall-clock time, also known as elapsed time. The basic unit of this workload is the second. Execution time depends on many factors, explained below:
- choice of algorithm
- data structure
- input data
- platform (the machine hardware and operating system)
- language

Workload and Speed Metrics
Factors affecting execution time:
- Algorithm: the algorithm used has a great impact on execution time.
- Data structure: how the data are structured also impacts performance.
- Input: the execution time of many applications does not depend on the input data; for instance, an N-point FFT takes O(N log N) time. Other programs, such as sorting and searching, can take different times on different input data. When using execution time as the workload for such input-dependent programs, we need to use either the worst-case time or the time for a baseline input data set.
- Platform: the machine hardware and operating system affect performance. Other factors include the memory hierarchy (cache, main memory, disk) and the operating system version.
- Language: the language used to code the application, the compiler/linker options, and the library functions all affect the execution time.

Floating-point count: when the application program is simple and its workload is not input-dependent, the workload can be determined by code inspection. When the application code is complex, or when the workload varies with the input data (e.g., sorting), one can measure the workload by running the application on a specific machine; this run generates a report of the number of flops or instructions actually executed. This approach has been used in the NAS benchmarks, where the flop count is determined by an execution run.

Workload and Speed Metrics
Rules for counting floating-point operations (NAS standards):

Operation                        | Flop Count | Comments on Rules
A[2*i] = B[j-1] + 1.5*C - 2;     | 3          | Add, subtract, or multiply each count as 1 flop; index arithmetic is not counted; assignment is not counted separately
X = Y;                           | 1          | An isolated assignment is counted as 1 flop
if (X > Y) Max = 2.0*X;          | 2          | A comparison is counted as 1 flop
X = (float) i + 3.0;             | 2          | A type conversion is counted as 1 flop
X = Y / 3.0 + sqrt(Z);           | 9          | A division or square root is counted as 4 flops
X = sin(Y) - exp(Z);             | 17         | A sine, exponential, etc. is counted as 8 flops

Caveats: using execution time or instruction count as the workload metric leads to a strange phenomenon: the workload changes when the same application is run on different systems, and it can be determined only by executing the program. The flop count and the instruction count have another advantage: their corresponding speed and utilization measures give some clue as to how well the application is implemented. Highly tuned programs can achieve more; it is possible to reach 90% utilization in some numerical subroutines.
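As a small worked illustration of these counting rules (my own example, not taken from the NAS suite): a DAXPY-style loop performs one multiply and one add per iteration, i.e. 2 flops per iteration, so an N-iteration loop has a workload of 2N flops by code inspection; dividing by the measured wall-clock time gives the speed in Mflop/s.

#include <stdio.h>
#include <sys/time.h>

#define N 10000000

static double x[N], y[N];

int main(void)
{
    struct timeval t0, t1;
    double a = 2.5, sec, flops;
    long i;

    for (i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }

    gettimeofday(&t0, NULL);
    for (i = 0; i < N; i++)
        y[i] = y[i] + a * x[i];      /* 1 multiply + 1 add = 2 flops per iteration */
    gettimeofday(&t1, NULL);

    sec   = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    flops = 2.0 * (double)N;         /* workload by code inspection: 2N flops */
    printf("y[0] = %f, %.1f Mflop in %.4f s = %.1f Mflop/s\n",
           y[0], flops / 1e6, sec, flops / sec / 1e6);
    return 0;
}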

Workload and Speed Metrics
Summary of performance metrics: to sum up, all three metrics are useful, but especially the flop count and the execution time. In practice, the flop count workload is often determined by code inspection, as described in the table above, while the execution time is measured on a specific machine under a set of specific conditions, including the hardware platform, compiler options, and input data set. Remark: the flop count metric is more stable.

Computational Characteristics of Parallel Computers
The historical values of the performance parameters of PVPs, SMPs, MPPs, and clusters include:
- year of release to market
- clock rate (MHz)
- memory capacity
- machine size
- peak performance per node (Mflop/s)
- peak performance on n (> 1) nodes (Gflop/s)

Computational Characteristics of Parallel Computers
The memory hierarchy: for each layer of the hierarchy, three parameters play a vital role in extracting performance:
- Capacity (C): how many bytes of data can be held by the device?
- Latency (L): how much time is needed to fetch one word from the device?
- Bandwidth (B): when moving a large amount of data, how many bytes can be transferred from the device per second?

Performance depends on how fast the system can move data between the processors and the memories. Performance parameters of a typical memory hierarchy:

Level          | Capacity (C)  | Latency (L)      | Bandwidth (B)
Registers      | < 2 KB        | 0 cycles         | 1-32 GB/s
Level-1 cache  | 4-256 KB      | 0-2 cycles       | 1-16 GB/s
Level-2 cache  | 64 KB-4 MB    | 2-10 cycles      | 1-4 GB/s
Main memory    | 16 MB-16 GB   | 10-100 cycles    | 0.4-2 GB/s
Remote memory  | 1-100 GB      | 100-100K cycles  | 1-300 MB/s
Disk           | 1-100 GB      | 100K-1M cycles   | 1-16 MB/s
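The bandwidth B of the main-memory layer can be estimated with a simple streaming copy, roughly in the spirit of the STREAM benchmark; this is only a sketch, and the array size is an arbitrary assumption chosen to exceed the caches.

#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

#define N (1 << 24)   /* 16M doubles = 128 MB per array, larger than typical caches */

int main(void)
{
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    struct timeval t0, t1;
    double sec, bytes;
    long i;

    if (!a || !b) return 1;
    for (i = 0; i < N; i++) a[i] = 1.0;

    gettimeofday(&t0, NULL);
    for (i = 0; i < N; i++)
        b[i] = a[i];                        /* 8 bytes read + 8 bytes written per element */
    gettimeofday(&t1, NULL);

    sec   = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    bytes = 16.0 * (double)N;               /* total bytes moved through main memory */
    printf("b[0] = %f, copy bandwidth ~ %.2f GB/s\n", b[0], bytes / sec / 1e9);

    free(a);
    free(b);
    return 0;
}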

Computational Characteristics of Parallel Computers
The faster and smaller devices are closer to the processor. The devices closest to the processor are the registers, which are part of the processor chip itself. The level-1 cache is usually on the processor chip, while the level-2 cache is off-chip. A level-3 cache, when present, is off-chip and shared by several processors in an SMP node. The main memory includes the local memory in each node, and the global memory in machines with a centralized shared memory.

Parallelism and Interaction Overheads
The time to execute a parallel program is

T = T_comp + T_par + T_interact

where T_comp, T_par, and T_interact are the times needed to execute the computation, the parallelism operations, and the interaction operations, respectively (assuming that these components do not overlap).
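A small worked example with assumed numbers, purely for illustration: if T_comp = 8 s, T_par = 0.5 s, and T_interact = 1.5 s, then T = 8 + 0.5 + 1.5 = 10 s, and the two overhead terms account for 20% of the total execution time.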

Parallelism and Interaction Overheads
Sources of parallelism overhead:
- Process management, such as creation, termination, and context switching
- Grouping operations, such as the creation or destruction of a process group
- Process inquiry operations, such as asking for a process identification, rank, group identification, or group size
Creating a process or a group is expensive on parallel computers. This matters for a parallel program that is executed many times to process raw data.

Sources of interaction overhead:
- Synchronization, such as barriers, locks, critical regions, and events
- Aggregation, such as reduction and scan
- Communication, such as point-to-point and collective communication, and the reading/writing of shared variables
- Idleness due to interaction

Parallelism and Interaction Overheads
Quantifying the overheads involves:
- Measurement conditions
- Measurement methods
- Expressions for the parallelism overhead due to communication: point-to-point communication, i.e., the communication overhead for a message of length m (in bytes), characterized by a startup time and an asymptotic bandwidth; and collective communication and collective computation

Overhead measurement conditions: the following conditions are used when estimating the overheads involved:
- The data structures used (they are always made small enough to fit into the node memory so that there are no page faults)
- The programming language used
- The compiler options used (the best compiler options should be chosen)
- The message-passing library used
- The communication hardware and protocol used
- The use of elapsed wall-clock time
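The point-to-point overhead is commonly summarized by the linear model t(m) = t0 + m / r_inf, where t0 is the startup (latency) time and r_inf is the asymptotic bandwidth. The sketch below evaluates that closed-form expression; the two parameter values are assumptions standing in for numbers fitted from measured data.

#include <stdio.h>

/* Linear communication-cost model: t(m) = t0 + m / r_inf.
 * t0    : startup (latency) time in seconds, fitted from measurements
 * r_inf : asymptotic bandwidth in bytes per second, fitted from measurements */
static double comm_time(double m_bytes, double t0, double r_inf)
{
    return t0 + m_bytes / r_inf;
}

int main(void)
{
    /* Assumed example parameters: 20 us startup, 1 GB/s asymptotic bandwidth. */
    double t0 = 20e-6, r_inf = 1e9;
    double m;

    for (m = 1e2; m <= 1e7; m *= 10.0)
        printf("m = %8.0f bytes  estimated t(m) = %.6f s\n",
               m, comm_time(m, t0, r_inf));
    return 0;
}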

Parallelism and Interaction Overheads
Overhead measurement methods:
- Measuring point-to-point communication between two nodes (the ping-pong test)
- Measuring point-to-point communication involving n nodes, and collective communication performance
Interpretation of the overhead measurement data:
- Method 1: present the results in tabular form
- Method 2: present the data as curves
- Method 3: present the data using a simple closed-form expression that can be used to estimate the overhead for various message lengths and machine sizes

Remarks: knowing the overheads helps the programmer decide how best to develop a parallel program on a given parallel computer. Parallelism and interaction overheads are often large compared to the basic computation time; in many cases they are significant and should be quantified, and their values vary greatly from one system to another.
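A minimal sketch of the ping-pong test mentioned above, written with MPI; the message size and repetition count are arbitrary assumptions, and a real measurement would sweep over many message lengths.

#include <stdio.h>
#include <mpi.h>

#define MSG_BYTES (1 << 20)   /* 1 MB message (arbitrary choice) */
#define REPS      100

int main(int argc, char **argv)
{
    static char buf[MSG_BYTES];
    int rank, size, i;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) printf("ping-pong needs at least 2 ranks\n");
        MPI_Finalize();
        return 0;
    }

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < REPS; i++) {
        if (rank == 0) {            /* node 0: send, then wait for the echo */
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {     /* node 1: receive, then send back */
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0) {
        double one_way = (t1 - t0) / (2.0 * REPS);   /* half the average round-trip time */
        printf("one-way time %.6f s, bandwidth %.2f MB/s\n",
               one_way, MSG_BYTES / one_way / 1e6);
    }

    MPI_Finalize();
    return 0;
}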

Performance Metrics: Phase Parallel Model
Phase parallel model, analysis: consider a sequential program C consisting of a sequence of k major computational phases C_1, C_2, ..., C_k. We want to develop an efficient parallel program by exploiting each phase C_i with a degree of parallelism DOP_i. In the phase parallel model of an application algorithm, the phases execute one after another with an interaction step between consecutive phases; phase C_i has workload W_i, sequential time T_1(i), and degree of parallelism DOP_i.

Phase parallel model, basic metrics: while executing phase C_i on p processors with 1 <= p <= DOP_i, the parallel execution time for that phase is T_p(i) = T_1(i) / p. The total parallel execution time over p nodes becomes

T_p = Σ_{1 <= i <= k} T_1(i) / min(DOP_i, p) + T_par + T_interact

where T_par and T_interact denote all parallelism and interaction overheads.
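A small worked example with assumed numbers: suppose k = 2, T_1(1) = 6 s with DOP_1 = 4, T_1(2) = 4 s with DOP_2 = 2, and T_par + T_interact = 1 s. On p = 8 nodes, T_p = 6/min(4,8) + 4/min(2,8) + 1 = 1.5 + 2.0 + 1.0 = 4.5 s, while T_1 = 10 s, so the speedup is about 10/4.5 ≈ 2.2 even though 8 nodes are used.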

Performance Metrics: Phase Parallel Model
Remarks:
1. In many cases the total overhead is dominated by the communication overhead.
2. The implementation of point-to-point and collective communication, and the computation overhead for various message sizes in MPI, play an important role in this overhead.
3. These metrics help the application user understand the performance issues of a particular code.
4. These expressions are useful for estimating the parallel execution time.

Phase parallel model notation (W denotes the total workload):

Notation | Terminology            | Definition
T_1      | Sequential time        | T_1 = Σ_{1 <= i <= k} T_1(i)
T_p      | Parallel (p-node) time | T_p = Σ_{1 <= i <= k} T_1(i) / min(DOP_i, p) + T_par + T_interact
T_inf    | Critical path          | T_inf = Σ_{1 <= i <= k} T_1(i) / DOP_i
S_p      | Speedup                | S_p = T_1 / T_p
P_p      | p-node speed           | P_p = W / T_p
E_p      | p-node efficiency      | E_p = S_p / p = T_1 / (p T_p)
T_0      | Total overhead         | T_0 = T_par + T_interact

Performance Metrics of Parallel Systems
Speedup: the speedup S_p is defined as the ratio of the serial runtime of the best sequential algorithm for solving a problem to the time taken by the parallel algorithm to solve the same problem on p processors. The p processors used by the parallel algorithm are assumed to be identical to the one used by the sequential algorithm.
Cost: the cost of solving a problem on a parallel system is the product of the parallel runtime and the number of processors used, C = p · T_p.

Efficiency: the ratio of the speedup to the number of processors, E_p = S_p / p. Efficiency can also be expressed as the ratio of the execution time of the fastest known sequential algorithm to the cost of solving the same problem on p processors. The cost of solving a problem on a single processor is the execution time of the best known sequential algorithm.
Cost optimal: a parallel system is said to be cost-optimal if the cost of solving a problem on the parallel computer is proportional to the execution time of the fastest known sequential algorithm on a single processor.
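As a standard textbook worked example (not specific to this lecture): adding n numbers on p = n processors with a tree reduction takes T_p = Θ(log n), so the cost is p · T_p = Θ(n log n), while the best sequential algorithm takes Θ(n) time; the cost grows faster than the sequential time, so this formulation is not cost-optimal. Using p = n / log n processors instead, each processor first adds log n numbers locally and the partial sums are then combined in Θ(log n) steps, giving T_p = Θ(log n) and cost Θ(n), which is cost-optimal.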

Performance Metrics of Parallel Systems
A cost-optimal parallel system has an efficiency of Θ(1). Cost is sometimes referred to as work or as the processor-time product, and a cost-optimal system is also known as a pT_p-optimal system. Using fewer than the maximum possible number of processors to execute a parallel algorithm is called scaling down a parallel system in terms of the number of processors.

Scalability and Speedup Analysis
An ideal scalability metric should have the following two properties:
1. It predicts the workload growth rate with respect to the increase in machine size.
2. It is consistent with execution time, i.e., a more scalable system always has a shorter execution time.
Remark: it can be shown that, under very reasonable conditions, a system with a small isoutilization (and thus more scalable) always has a shorter execution time; systems with a small isoutilization are more scalable than those with a large one.

Conclusions
Summary: performance metrics and the measurement of the performance of a parallel program have been explained, including workload and speed metrics. The types of overhead in parallel computing have been discussed. Theoretical concepts of performance and scalability have been covered: the phase parallel model and the speedup, efficiency, isoefficiency, isospeed, and isoutilization metrics.

References
1. Ernst L. Leiss, Parallel and Vector Computing: A Practical Introduction, McGraw-Hill Series on Computer Engineering, New York (1995).
2. Albert Y. H. Zomaya, Parallel and Distributed Computing Handbook, McGraw-Hill Series on Computer Engineering, New York (1996).
3. Vipin Kumar, Ananth Grama, Anshul Gupta, George Karypis, Introduction to Parallel Computing: Design and Analysis of Algorithms, Benjamin/Cummings, Redwood City, CA (1994).
4. William Gropp, Rusty Lusk, Tuning MPI Applications for Peak Performance, Pittsburgh (1996).
5. Ian T. Foster, Designing and Building Parallel Programs: Concepts and Tools for Parallel Software Engineering, Addison-Wesley (1995).
6. Kai Hwang, Zhiwei Xu, Scalable Parallel Computing: Technology, Architecture, Programming, McGraw-Hill, New York (1997).
7. David E. Culler, Jaswinder Pal Singh, with Anoop Gupta, Parallel Computer Architecture: A Hardware/Software Approach, Morgan Kaufmann Publishers (1999).
