EECS 583 Class 11: Instruction Scheduling / Software Pipelining Intro




EECS 583 Class 11: Instruction Scheduling / Software Pipelining Intro. University of Michigan, October 8, 2014.

Announcements & Reading Material

Reminder: HW
Class project proposals
» Signup sheet available next Weds in class
» Think about partners/topic!
Today and next Wednesday's class
» "Iterative Modulo Scheduling: An Algorithm for Software Pipelining Loops," B. Rau, MICRO-27, 1994, pp. 63-74.
Monday is Fall break

Sample Project Ideas

Memory system
» Data cache prefetching
» Instruction cache prefetching
» Data layout for improved cache behavior
» Memory contention (ReQoS [ASPLOS '13], compiling for niceness [CGO '12])
Reliability/HW variability
» AVF profiling, vulnerability analysis
» Selective code duplication for soft error protection
» Low-cost fault recovery
» Runtime system for variability

Sample Project Ideas (Dynamic Compiler)

Traditional dynamic compiler
» Dynamo [PLDI '00], DynamoRIO, Pin [PLDI '05]
» Run-time program analysis for security
» Run-time classic optimization (if they don't have it already!)
Light-weight dynamic compiler
» Protean Code [MICRO '14], scenario-based optimization [CGO '09]
» Runtime auto-parallelization
» Runtime instruction scheduling
» Runtime data layout optimization
Denver/Transmeta-style architecture
» Binary translation coupled with hardware (BlockChop [ISCA '12])
» Example: deactivate/power gate parts of the processor (FUs, registers, cache)

Sample Project Ideas (Approximate Computing)

» Trading accuracy for performance
» Example: loop perforation
» Other compilation techniques: SAGE [MICRO '13], Paraprox [ASPLOS '14]
» Problems: what to approximate, estimating errors, bounding error, maximizing performance gain, etc.

More Project Ideas

Compile for GPU/GPU-like architecture
» Auto-optimization for different HW configurations
» Reduce branch divergence, etc.
Applying machine learning to compilation problems
» Identify hot paths
» Identify good optimization combinations
» Auto-parallelism [PLDI '09]
Compiler analysis
» Accelerator design
» Parallelism opportunities

And Yet a Few More

Program analysis for security
» Efficient taint tracking
Profiling tools
» Distributed control flow/energy profiling (profiling + aggregation)
Misc
» Program distillation: create a subset program with equivalent memory/branch behavior

Remember, don't be constrained by my suggestions; you can pick other topics!

From Last Time: Class Problem

Machine model latencies: add: 1, mpy: 3, load: 2 (sync 1), store: 1 (sync 1)

1. Draw the dependence graph.
2. Label edges with type and latencies.

1: r1 = load(r2)
2: r2 = r2 + 1
3: store (r8, r2)
4: r3 = load(r2)
5: r4 = r1 * r3
6: r5 = r5 + r4
7: r2 = r6 + 4
8: store (r2, r5)

Dependence Graph Properties - Estart

Estart = earliest start time (as soon as possible, ASAP)
» Schedule length with infinite resources (dependence height)
» Estart = 0 if node has no predecessors
» Estart = MAX(Estart(pred) + latency) over all predecessors
» Example: [dependence graph over ops 1-8, annotated with Estart values]
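Estart is a longest-path computation over the dependence DAG, so it is easy to sketch. The Python below is illustrative only, not from the slides; the node set, the succs edge map, and the per-edge latency table are hypothetical data structures.

```python
# Hypothetical sketch: Estart (ASAP) via a topological walk of the DAG.
from collections import deque

def estart(nodes, succs, latency):
    """succs[n] = successors of n; latency[(n, s)] = latency of edge n -> s."""
    preds_left = {n: 0 for n in nodes}
    for n in nodes:
        for s in succs.get(n, []):
            preds_left[s] += 1
    e = {n: 0 for n in nodes}        # Estart = 0 for nodes with no predecessors
    work = deque(n for n in nodes if preds_left[n] == 0)
    while work:
        n = work.popleft()
        for s in succs.get(n, []):
            # Estart = MAX(Estart(pred) + latency) over all predecessors
            e[s] = max(e[s], e[n] + latency[(n, s)])
            preds_left[s] -= 1
            if preds_left[s] == 0:
                work.append(s)
    return e
```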

Lstart

Lstart = latest start time (as late as possible, ALAP)
» Latest time a node can be scheduled s.t. the schedule length is not increased beyond the infinite-resource schedule length
» Lstart = Estart if node has no successors
» Lstart = MIN(Lstart(succ) - latency) over all successors
» Example: [dependence graph over ops 1-8, annotated with Lstart values]
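Lstart is the mirror-image pass, walked backward from the Estart values. A companion sketch under the same hypothetical data model; it assumes all edge latencies are positive, so visiting nodes in decreasing Estart order is a valid reverse topological order.

```python
def lstart(nodes, succs, latency, e):
    """e = the Estart map from the previous sketch."""
    l = {}
    for n in sorted(nodes, key=lambda x: e[x], reverse=True):
        ss = succs.get(n, [])
        if not ss:
            l[n] = e[n]              # Lstart = Estart for nodes with no successors
        else:
            # Lstart = MIN(Lstart(succ) - latency) over all successors
            l[n] = min(l[s] - latency[(n, s)] for s in ss)
    return l
```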

Slack

Slack = measure of scheduling freedom
» Slack = Lstart - Estart for each node
» Larger slack means more mobility
» Example: [dependence graph over ops 1-8, annotated with slack values]

Critical Path

Critical operations = operations with slack = 0
» No mobility; they cannot be delayed without extending the schedule length of the block
» Critical path = a sequence of critical operations from a node with no predecessors to the exit node; there can be multiple critical paths

[Example: dependence graph over ops 1-8 with the critical path highlighted]
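With the Estart and Lstart maps from the sketches above, slack and the critical operations fall out in a couple of lines (same hypothetical setup):

```python
def slack(e, l):
    return {n: l[n] - e[n] for n in e}        # Slack = Lstart - Estart

def critical_ops(e, l):
    return [n for n in e if l[n] == e[n]]     # slack == 0: lies on a critical path
```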

Class Problem

[Dependence graph over nodes 1-9, edges labeled with latencies]

Node  Estart  Lstart  Slack
1
2
3
4
5
6
7
8
9

Critical path(s) =

Operation Priority

Need a mechanism to decide which ops to schedule first (when you have multiple choices). Common priority functions:
» Height: distance from the exit node
  - Gives priority to the amount of work left to do
» Slackness: inversely proportional to slack
  - Gives priority to ops on the critical path
» Register use: priority to nodes with more source operands and fewer destination operands
  - Reduces the number of live registers
» Uncover: high priority to nodes with many children
  - Frees up more nodes
» Original order: when all else fails

Height-Based Priority

Height-based is the most common:
» priority(op) = MaxLstart - Lstart(op) + 1

[Example: dependence graph annotated with (Estart, Lstart) pairs, plus a table of height-based priorities for each op]
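Using the Lstart map from the earlier sketch, the height-based priority is a one-liner (illustrative, not the course's reference code):

```python
def height_priority(l):
    max_lstart = max(l.values())
    # priority(op) = MaxLstart - Lstart(op) + 1
    return {n: max_lstart - l[n] + 1 for n in l}
```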

List Scheduling (aka Cycle Scheduler)

Build dependence graph, calculate priority
Add all ops to UNSCHEDULED set
time = -1
while (UNSCHEDULED is not empty)
» time++
» READY = UNSCHEDULED ops whose incoming dependences have been satisfied
» Sort READY using the priority function
» For each op in READY (highest to lowest priority)
  - Can op be scheduled at the current time? (are the resources free?)
    Yes: schedule it, op.issue_time = time
      Mark resources busy in RU_map relative to issue time
      Remove op from UNSCHEDULED/READY sets
    No: continue
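A minimal sketch of the cycle scheduler, following the pseudocode above. The resource model is a hypothetical simplification: every op needs exactly one unit of a single resource class (res_need), units are fully pipelined, and res_count gives the number of units per class.

```python
def cycle_schedule(nodes, preds, latency, prio, res_need, res_count):
    unscheduled = set(nodes)
    issue = {}                       # op -> issue time
    ru_map = {}                      # (cycle, resource) -> units in use
    time = -1
    while unscheduled:
        time += 1
        # READY = unscheduled ops whose incoming dependences are satisfied
        ready = [n for n in unscheduled
                 if all(p in issue and issue[p] + latency[(p, n)] <= time
                        for p in preds.get(n, []))]
        for op in sorted(ready, key=lambda n: prio[n], reverse=True):
            r = res_need[op]
            if ru_map.get((time, r), 0) < res_count[r]:   # resources free?
                issue[op] = time
                ru_map[(time, r)] = ru_map.get((time, r), 0) + 1
                unscheduled.remove(op)
    return issue
```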

Cycle Scheduling Example

[Example: dependence graph annotated with (Estart, Lstart) pairs and height-based priorities; an RU_map with ALU and MEM columns for cycles 0-9; and a schedule table recording the Ready and Placed ops at each cycle]

List Scheduling (Operation Scheduler)

Build dependence graph, calculate priority
Add all ops to UNSCHEDULED set
while (UNSCHEDULED is not empty)
» op = operation in UNSCHEDULED with highest priority
» For time = estart to some deadline
  - Can op be scheduled at this time? (are the resources free?)
    Yes: schedule it, op.issue_time = time
      Mark resources busy in RU_map relative to issue time
      Remove op from UNSCHEDULED
    No: continue
» Deadline reached w/o scheduling op?
  Yes: unplace all conflicting ops at op.estart and add them back to UNSCHEDULED
    Schedule op at estart
    Mark resources busy in RU_map relative to issue time
    Remove op from UNSCHEDULED
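The operation scheduler can be sketched in the same hypothetical model. The eviction step mirrors the pseudocode: if no slot up to the deadline fits, the conflicting ops at the op's Estart are unplaced and rescheduled (a real implementation would also bound repeated evictions).

```python
def op_schedule(nodes, preds, latency, prio, res_need, res_count, e, deadline):
    unscheduled = list(nodes)
    issue, ru_map = {}, {}
    while unscheduled:
        unscheduled.sort(key=lambda n: prio[n], reverse=True)
        op = unscheduled.pop(0)      # highest-priority unscheduled op
        r = res_need[op]
        lo = max([e[op]] + [issue[p] + latency[(p, op)]
                            for p in preds.get(op, []) if p in issue])
        for t in range(lo, deadline + 1):
            if ru_map.get((t, r), 0) < res_count[r]:      # resources free?
                issue[op] = t
                ru_map[(t, r)] = ru_map.get((t, r), 0) + 1
                break
        else:
            # Deadline reached: evict conflicting ops at op's Estart, retry them
            for v in [o for o, ti in issue.items()
                      if ti == e[op] and res_need[o] == r]:
                ru_map[(e[op], r)] -= 1
                del issue[v]
                unscheduled.append(v)
            issue[op] = e[op]
            ru_map[(e[op], r)] = ru_map.get((e[op], r), 0) + 1
    return issue
```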

Classroom Problem: Operation Scheduling

Machine: 2 issue, 1 memory port, 1 ALU
Memory port = 2 cycles, pipelined
ALU = 1 cycle

[Dependence graph annotated with (Estart, Lstart) pairs; an RU_map with ALU and MEM columns for cycles 0-9; and a schedule table recording Ready and Placed ops at each cycle]

1. Calculate height-based priorities
2. Schedule using the operation scheduler

Generalize Beyond a Basic Block

Superblock
» Single entry
» Multiple exits (side exits)
» No side entries
Schedule just like a BB
» Priority calculations need to change
» Must deal with control deps

Change Focus to Scheduling Loops

Most of program execution time is spent in loops.
Problem: How do we achieve compact schedules for loops?

for (j=0; j<100; j++)
    b[j] = a[j] * 6

      r1 = _a
      r2 = _b
      r9 = r1 + 400
Loop: 1: r3 = load(r1)
      2: r4 = r3 * 6
      3: store (r2, r4)
      4: r1 = r1 + 4
      5: r2 = r2 + 4
      6: p1 = cmpp (r1 < r9)
      7: brct p1 Loop

Basic Approach: List Schedule the Loop Body

Schedule each iteration independently; iterations 1 .. n execute back to back.

Resources: 4 issue, 2 alu, 1 mem, 1 br
Latencies: add=1, mpy=3, ld=2, st=1, br=1

1: r3 = load(r1)
2: r4 = r3 * 6
3: store (r2, r4)
4: r1 = r1 + 4
5: r2 = r2 + 4
6: p1 = cmpp (r1 < r9)
7: brct p1 Loop

time  ops
0     1, 4
1     6
2     2
3     -
4     -
5     3, 5, 7

Total time = 6 * n

Unroll Then Schedule Larger Body

Unroll by 2 and schedule iteration pairs 1,2; 3,4; 5,6; ...; n-1,n as one body.

Resources: 4 issue, 2 alu, 1 mem, 1 br
Latencies: add=1, cmpp=1, mpy=3, ld=2, st=1, br=1

[Same loop body as before, two copies; the second iteration's ops are marked with ' below]

time  ops
0     1, 4
1     1', 6
2     2, 4'
3     2', 6'
4     -
5     3, 5, 7
6     3', 5', 7'

Total time = 7 * n/2

Problems With Unrolling

Code bloat
» Typical unroll is 4-6x
» Use profile statistics to only unroll important loops
» But still, code grows fast
Barrier across unrolled bodies
» I.e., for unroll-2, you can only overlap iterations 1 and 2, 3 and 4, ...
Does this mean unrolling is bad?
» No, in some settings it's very useful
  - Low trip count
  - Lots of branches in the loop body
» But in other settings there is room for improvement

Overlap Iterations Using Pipelining

[Diagram: iterations 1 .. n overlapped in time]

With hardware pipelining, while one instruction is in fetch, another is in decode, and another is in execute. The same thing happens here: multiple iterations are processed simultaneously, with each instruction in a separate stage. One iteration still takes the same time, but the time to complete n iterations is reduced!

A Software Pipeline

Loop body with 4 ops: A, B, C, D

time  A
      B  A
      C  B  A         <- Prologue: fill the pipe
      D  C  B  A
      D  C  B  A      <- Kernel: steady state
      D  C  B  A
         D  C  B
            D  C      <- Epilogue: drain the pipe
               D

Steady state: 4 iterations executed simultaneously, 1 operation from each iteration. When the pipe is full, an iteration starts and another finishes every cycle.

Creating Software Pipelines

Lots of software pipelining techniques out there
Modulo scheduling
» Most widely adopted
» Practical to implement, yields good results
Conceptual strategy
» Unroll the loop completely
» Then schedule the code completely, subject to 2 constraints
  - All iteration bodies have identical schedules
  - Each iteration is scheduled to start some fixed number of cycles later than the previous iteration
» Initiation Interval (II) = fixed delay between the start of successive iterations
» Given the 2 constraints, the unrolled schedule is repetitive (kernel) except for the portions at the beginning (prologue) and end (epilogue)
  - The kernel can be re-rolled to yield a new loop

Creating Software Pipelines (2)

Create a schedule for 1 iteration of the loop such that, when the same schedule is repeated at intervals of II cycles:
» No intra-iteration dependence is violated
» No inter-iteration dependence is violated
» No resource conflict arises between operations in the same or distinct iterations

We will start out assuming Itanium-style hardware support, then remove it later:
» Rotating registers
» Predicates
» Software pipeline loop branch

Terminology

[Diagram: iterations 1, 2, 3, each offset from the previous by II cycles]

Initiation Interval (II) = fixed delay between the start of successive iterations
Each iteration can be divided into stages consisting of II cycles each
The number of stages in 1 iteration is termed the stage count (SC)
It takes SC - 1 cycles to fill/drain the pipe

Resource Usage Legality

Need to guarantee that:
» No resource is used at 2 points in time that are separated by an interval which is a multiple of II
» I.e., within a single iteration, the same resource is never used more than 1x at the same time modulo II
» This is known as the modulo constraint; it is where the name modulo scheduling comes from
» The modulo reservation table (MRT) solves this problem
  - To schedule an op at time T needing resource R, the entry for R at T mod II must be free
  - Mark R busy at T mod II if scheduled

[Example MRT: II = 3, rows 0..2, one column per resource: 2 ALUs, mem, 2 buses, br]
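A sketch of the modulo constraint as a modulo reservation table; the resource names and counts are illustrative:

```python
class MRT:
    """Modulo reservation table: resource R at time T occupies row T mod II."""
    def __init__(self, ii, res_count):
        self.ii = ii
        self.res_count = res_count   # resource -> number of units, e.g. {"alu": 2}
        self.table = {}              # (row, resource) -> units in use

    def can_place(self, t, r):
        return self.table.get((t % self.ii, r), 0) < self.res_count[r]

    def place(self, t, r):
        key = (t % self.ii, r)
        self.table[key] = self.table.get(key, 0) + 1

# With II = 3, ops at times 1 and 7 collide on "mem":
# both map to row 1 mod 3 == 7 mod 3 == 1.
mrt = MRT(ii=3, res_count={"alu": 2, "mem": 1, "br": 1})
mrt.place(1, "mem")
assert not mrt.can_place(7, "mem")
```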

Dependences in a Loop

Need to worry about 2 kinds:
» Intra-iteration
» Inter-iteration
Delay
» Minimum time interval between the start of the 2 operations
» Determined by operation read/write times
Distance
» Number of iterations separating the 2 operations involved
» Distance of 0 means intra-iteration
A recurrence manifests itself as a circuit in the dependence graph.

[Example: dependence graph with each edge annotated with a <delay, distance> tuple]

Dynamic Single Assignment (DSA) Form

It is impossible to overlap iterations when each iteration writes to the same registers, so we must remove the anti and output dependences.

Virtual rotating registers:
* Each register is an infinite push-down array (Expanded Virtual Register, or EVR)
* Writes go to the top element, but any element can be referenced
* The remap operation slides everything down: r[n] changes to r[n+1]

A program is in DSA form if the same virtual register (EVR element) is never assigned more than 1x on any dynamic execution path.

Before DSA conversion:
1: r3 = load(r1)
2: r4 = r3 * 6
3: store (r2, r4)
4: r1 = r1 + 4
5: r2 = r2 + 4
6: p1 = cmpp (r1 < r9)
7: brct p1 Loop

After DSA conversion:
1: r3[-1] = load(r1[0])
2: r4[-1] = r3[-1] * 6
3: store (r2[0], r4[-1])
4: r1[-1] = r1[0] + 4
5: r2[-1] = r2[0] + 4
6: p1[-1] = cmpp (r1[-1] < r9)
remap r1, r2, r3, r4, p1
7: brct p1[-1] Loop
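To make the r[n] -> r[n+1] sliding concrete, here is a toy EVR model (purely illustrative; real implementations map EVRs onto physical or rotating register files, as the next slide discusses):

```python
class EVR:
    """Expanded virtual register: a push-down array indexed as r[n]."""
    def __init__(self, init=None):
        self.vals = {0: init}        # n -> value of r[n]

    def __getitem__(self, n):
        return self.vals[n]

    def __setitem__(self, n, v):
        self.vals[n] = v             # within an iteration, writes target r[-1]

    def remap(self):
        # remap slides everything down: r[n] becomes r[n+1]
        self.vals = {n + 1: v for n, v in self.vals.items()}

r1 = EVR(init=100)
r1[-1] = r1[0] + 4                   # op 4 of the example: r1[-1] = r1[0] + 4
r1.remap()                           # now r1[0] == 104 and r1[1] == 100
```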

Physical Realization of EVRs

An EVR may contain an unlimited number of values
» But only a finite, contiguous set of an EVR's elements are ever live at any point in time
» These must be given physical registers
Conventional register file
» Remaps are essentially copies, so each EVR is realized by a set of physical registers, and copies are inserted
Rotating registers
» Direct support for EVRs
» No copies needed
» The file is rotated after each loop iteration completes

Loop Dependence Example

1: r3[-1] = load(r1[0])
2: r4[-1] = r3[-1] * 6
3: store (r2[0], r4[-1])
4: r1[-1] = r1[0] + 4
5: r2[-1] = r2[0] + 4
6: p1[-1] = cmpp (r1[-1] < r9)
remap r1, r2, r3, r4, p1
7: brct p1[-1] Loop

[Dependence graph for this loop, with each edge annotated with a <delay, distance> tuple]

In DSA form, there are no inter-iteration anti or output dependences!

Homework Problem

Latencies: ld = 2, st = 1, add = 1, cmpp = 1, br = 1

1: r1[-1] = load(r2[0])
2: r3[-1] = r1[1] - r1[2]
3: store (r3[-1], r2[0])
4: r2[-1] = r2[0] + 4
5: p1[-1] = cmpp (r2[-1] < 100)
remap r1, r2, r3
6: brct p1[-1] Loop

Draw the dependence graph showing both intra- and inter-iteration dependences.

To be continued...