Advanced cache optimizations - overview. Why More on Memory Hierarchy?

[Figure: processor vs. memory performance, 1980-2010, log scale from 1 to 100,000. Processor performance climbs steeply while memory improves slowly: the processor-memory performance gap keeps growing.]

Review: 6 Basic Cache Optimizations

Reducing hit time:
1. Giving reads priority over writes (e.g., a read completes before earlier writes still in the write buffer)
2. Avoiding address translation during cache indexing

Reducing miss penalty:
3. Multilevel caches

Reducing miss rate:
4. Larger block size (compulsory misses)
5. Larger cache size (capacity misses)
6. Higher associativity (conflict misses)

11 Advanced Cache Optimizations

Reducing hit time:
1. Small and simple caches
2. Way prediction
3. Trace caches

Increasing cache bandwidth:
4. Pipelined caches
5. Multibanked caches
6. Nonblocking caches

Reducing miss penalty:
7. Critical word first
8. Merging write buffers

Reducing miss rate:
9. Compiler optimizations

Reducing miss penalty or miss rate via parallelism:
10. Hardware prefetching
11. Compiler prefetching

1. Fast Hit Times via Small and Simple Caches

Indexing the tag memory and then comparing takes time. A small cache helps hit time because a smaller memory takes less time to index. An L2 cache small enough to fit on the chip with the processor also avoids the time penalty of going off chip. Keep the mapping simple: direct mapped.

[Figure: access time (ns), roughly 0.50 to 2.50, vs. cache size (16 KB to 1 MB) for 1-way, 2-way, 4-way, and 8-way associativity; access time grows with both cache size and associativity.]

2. Fast Hit Times via Way Prediction

The idea: combine the fast hit time of a direct-mapped cache with the lower conflict misses of a 2-way set-associative cache. Way prediction keeps extra bits in the cache to predict the way (the block within the set) of the next cache access. On a way misprediction, the other blocks are checked for matches in the next clock cycle, so the access timeline is: hit time, then way-miss hit time, then miss penalty. Prediction accuracy can be as high as 85%.
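A quick worked example of the effective hit time under way prediction (a sketch: the 85% accuracy is from the slide, while the 1-cycle predicted hit and the 1 extra cycle on a way miss are assumed figures):

\[ \text{AvgHitTime} = 0.85 \times 1 + 0.15 \times (1 + 1) = 1.15 \text{ cycles} \]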

3. Fast Hit Times via Trace Cache (Pentium 4)

The P4 translated x86 instructions into RISC micro-ops.
1. Cache dynamic traces of the executed instructions rather than static sequences of instructions as determined by their layout in memory; the trace cache has a built-in branch predictor.
2. Cache the micro-ops rather than the x86 instructions; decode/translate from x86 to micro-ops only on a trace cache miss.
Problems:
- complicated address mapping, since addresses are no longer aligned to power-of-2 multiples of the word size
- instructions may appear multiple times, in multiple dynamic traces, due to different branch outcomes

4. Increasing Cache Bandwidth by Pipelining

Pipeline the cache access to maintain bandwidth, at the cost of higher latency. Instruction cache access pipeline stages: 1 on the Pentium, 2 on the Pentium Pro through Pentium III, 4 on the Pentium 4. Drawbacks:
- greater penalty on mispredicted branches
- more clock cycles between the issue of a load and the use of its data

5. Increasing Cache Bandwidth: Non-Blocking Caches

With out-of-order execution, the processor need not stall on a cache miss. "Hit under miss" reduces the effective miss penalty by doing useful work during the miss instead of ignoring CPU requests. A non-blocking (lockup-free) cache allows the data cache to continue to supply hits during a miss; this requires multi-banked memories.

6. Increasing Cache Bandwidth via Multiple Banks

Rather than treating the cache as a single monolithic block, divide it into independent banks that can support simultaneous accesses. Banking works best when the accesses naturally spread themselves across the banks, so the mapping of addresses to banks affects the behavior of the memory system. A simple mapping that works well is sequential interleaving: spread block addresses sequentially across the banks. E.g., with 4 banks, bank 0 holds all blocks whose block address modulo 4 is 0, bank 1 holds all blocks whose block address modulo 4 is 1, and so on. A small sketch of this mapping follows.
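A minimal sketch of sequential interleaving in C (the 4 banks come from the slide's example; the 64-byte block size is an assumption):

    #include <stdint.h>
    #include <stdio.h>

    #define NUM_BANKS  4      /* bank count from the slide's example */
    #define BLOCK_SIZE 64     /* assumed block size in bytes */

    /* Sequential interleaving: bank = block address modulo bank count. */
    static unsigned bank_of(uint64_t addr) {
        uint64_t block_addr = addr / BLOCK_SIZE;   /* which cache block */
        return (unsigned)(block_addr % NUM_BANKS);
    }

    int main(void) {
        /* Consecutive blocks land in banks 0, 1, 2, 3, 0, 1, ... */
        for (uint64_t addr = 0; addr < 8 * BLOCK_SIZE; addr += BLOCK_SIZE)
            printf("block %llu -> bank %u\n",
                   (unsigned long long)(addr / BLOCK_SIZE), bank_of(addr));
        return 0;
    }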

7. Reduce Miss Penalty: Early Restart and Critical Word First

Don't wait for the full block to arrive before restarting the CPU.
Early restart: as soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution.
Critical word first: request the missed word first from memory and send it to the CPU as soon as it arrives; the CPU continues execution while the rest of the words in the block are filled in.
Long blocks are more popular today, so critical word first is widely used.

8. Merging Write Buffer to Reduce Miss Penalty

A write buffer allows the processor to continue while waiting for writes to reach memory. When a new entry is loaded into the buffer, its address is checked against the other blocks in the buffer; if there is a match, the blocks are combined (merged). A simplified software model of this merging follows.
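A minimal sketch in C of the merging logic (illustrative only: the entry count, words per block, and all field names are assumptions, not a real hardware design):

    #include <stdbool.h>
    #include <stdint.h>

    #define WB_ENTRIES      4   /* assumed buffer depth */
    #define WORDS_PER_BLOCK 8   /* assumed block width */

    struct wb_entry {
        bool     valid;
        uint64_t block_addr;                 /* block-aligned address */
        uint8_t  word_valid;                 /* one bit per word in the block */
        uint64_t data[WORDS_PER_BLOCK];
    };

    static struct wb_entry wb[WB_ENTRIES];

    /* On a new store: if its block already has a buffer entry, merge the
       word into that entry instead of allocating a new one. Returns false
       when the buffer is full and the processor would have to stall. */
    bool wb_write(uint64_t block_addr, unsigned word, uint64_t value) {
        for (int i = 0; i < WB_ENTRIES; i++)
            if (wb[i].valid && wb[i].block_addr == block_addr) {
                wb[i].data[word] = value;        /* merge: address matched */
                wb[i].word_valid |= 1u << word;
                return true;
            }
        for (int i = 0; i < WB_ENTRIES; i++)
            if (!wb[i].valid) {                  /* allocate a fresh entry */
                wb[i].valid = true;
                wb[i].block_addr = block_addr;
                wb[i].word_valid = 1u << word;
                wb[i].data[word] = value;
                return true;
            }
        return false;  /* buffer full: stall until an entry drains to memory */
    }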

9. Reducing Misses by Compiler Optimizations

McFarling [1989] reduced cache misses by 75% on an 8 KB direct-mapped cache with 4-byte blocks, entirely in software.

Instructions:
- Reorder procedures in memory so as to reduce conflict misses
- Use profiling to look at conflicts (using tools they developed)

Data:
- Merging arrays: improve spatial locality with a single array of compound elements instead of 2 arrays
- Loop interchange: change the nesting of loops to access data in the order it is stored in memory
- Loop fusion: combine 2 independent loops that have the same looping and some variables in common
- Blocking: improve temporal locality by accessing blocks of data repeatedly instead of going down whole columns or rows

Merging Arrays Example

    #define SIZE 1024   /* assumed array length, for a compilable example */

    /* Before: 2 sequential arrays */
    int val[SIZE];
    int key[SIZE];

    /* After: 1 array of structures */
    struct merge {
        int val;
        int key;
    };
    struct merge merged_array[SIZE];

Reduces conflicts between val and key and improves spatial locality: val and key for the same index now sit in the same cache block.

Loop Interchange Example

    /* Before */
    for (k = 0; k < 100; k = k+1)
        for (j = 0; j < 100; j = j+1)
            for (i = 0; i < 5000; i = i+1)
                x[i][j] = 2 * x[i][j];

    /* After */
    for (k = 0; k < 100; k = k+1)
        for (i = 0; i < 5000; i = i+1)
            for (j = 0; j < 100; j = j+1)
                x[i][j] = 2 * x[i][j];

Sequential accesses instead of striding through memory every 100 words; improved spatial locality.

Loop Fusion Example

    /* Before */
    for (i = 0; i < N; i = i+1)
        for (j = 0; j < N; j = j+1)
            a[i][j] = 1/b[i][j] * c[i][j];
    for (i = 0; i < N; i = i+1)
        for (j = 0; j < N; j = j+1)
            d[i][j] = a[i][j] + c[i][j];

    /* After */
    for (i = 0; i < N; i = i+1)
        for (j = 0; j < N; j = j+1) {
            a[i][j] = 1/b[i][j] * c[i][j];
            d[i][j] = a[i][j] + c[i][j];
        }

Before fusion: 2 misses per access to a and c; after: 1 miss per access. Improves temporal locality, since a[i][j] and c[i][j] are reused while still in the cache.

Blocking Example

    /* Before */
    for (i = 0; i < N; i = i+1)
        for (j = 0; j < N; j = j+1) {
            r = 0;
            for (k = 0; k < N; k = k+1)
                r = r + y[i][k]*z[k][j];
            x[i][j] = r;
        }

The two inner loops:
- read all N x N elements of z[]
- read N elements of 1 row of y[] repeatedly
- write N elements of 1 row of x[]

Capacity misses are a function of N and the cache size: 2N^3 + N^2 words are accessed (assuming no conflicts; otherwise even more). Idea: compute on a B x B submatrix that fits in the cache; a blocked sketch appears after this slide.

10. Reducing Misses by Hardware Prefetching of Instructions & Data

Prefetching relies on having extra memory bandwidth that can be used without penalty.
Instruction prefetching: typically, the CPU fetches 2 blocks on a miss, the requested block and the next consecutive block. The requested block is placed in the instruction cache when it returns, and the prefetched block is placed in an instruction stream buffer.
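Returning to the blocking idea: a minimal sketch of the blocked version of the loop above, with N = 512 and blocking factor B = 32 (both illustrative values; B should be chosen so a B x B submatrix fits in the cache, and here N is divisible by B so no bounds guards are needed). Blocking cuts the words accessed from 2N^3 + N^2 to roughly 2N^3/B + N^2.

    #define N 512   /* assumed matrix dimension */
    #define B  32   /* assumed blocking factor */

    double x[N][N], y[N][N], z[N][N];   /* file-scope arrays: x starts zeroed */

    /* Blocked matrix multiply: iterate over B x B submatrices so the working
       set of y, z, and x stays cache-resident across the inner loops. */
    void blocked_matmul(void) {
        for (int jj = 0; jj < N; jj += B)
            for (int kk = 0; kk < N; kk += B)
                for (int i = 0; i < N; i++)
                    for (int j = jj; j < jj + B; j++) {
                        double r = 0;
                        for (int k = kk; k < kk + B; k++)
                            r = r + y[i][k] * z[k][j];
                        x[i][j] = x[i][j] + r;  /* accumulate partial sums */
                    }
    }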

11. Reducing Misses by Software Prefetching of Data

Data prefetch comes in two forms:
- Register prefetch: load data into a register (HP PA-RISC loads)
- Cache prefetch: load data into the cache (MIPS IV, PowerPC, SPARC v9)
Special prefetching instructions cannot cause faults; this is a form of speculative execution. A sketch using a compiler-visible prefetch intrinsic appears after this slide.

Compiler Optimization vs. Memory Hierarchy Search

The compiler tries to figure out memory hierarchy optimizations. A newer approach is auto-tuners: first run variations of the program on the target computer to find the best combination of optimizations (blocking, padding, ...) and algorithms, then produce C code to be compiled for that computer. Each auto-tuner targets a numerical method, e.g., PHiPAC (BLAS), ATLAS (BLAS), Sparsity (sparse linear algebra), Spiral (DSP), FFTW.
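To illustrate the software prefetching slide above: a hedged sketch using the GCC/Clang __builtin_prefetch intrinsic (the prefetch distance of 16 iterations is an assumed tuning parameter, and real compilers can also insert such prefetches automatically):

    #define PD 16   /* assumed prefetch distance, in elements */

    /* Software-prefetched streaming loop: fetch b[i + PD] into the cache
       while computing on b[i]; the prefetch cannot fault. */
    void scale(double *restrict a, const double *restrict b, long n) {
        for (long i = 0; i < n; i++) {
            if (i + PD < n)
                __builtin_prefetch(&b[i + PD], /*rw=*/0, /*locality=*/1);
            a[i] = 2.0 * b[i];
        }
    }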

[Figure: search for the best sparse matrix blocking for a finite element problem (Im, Yelick, Vuduc, 2005). Heat maps of Mflop/s over row block size r and column block size c, each in {1, 2, 4, 8}, for 8 computers: IBM Power 3, IBM Power 4, Intel Pentium M, Intel/HP Itanium, Intel/HP Itanium 2, Sun Ultra 2, Sun Ultra 3, AMD Opteron. The best block size (e.g., 4x2 vs. the reference) differs per machine, and across the 8 computers every possible column block size is selected somewhere. How could a compiler know?]

Summary of the 11 techniques ("+" marks the quantity each technique improves, following the classification above; "-" marks a quantity it hurts):

Technique | Hit time | Bandwidth | Miss penalty | Miss rate | HW cost/complexity | Comment
Small and simple caches | + | | | - | 0 | Trivial; widely used
Way-predicting caches | + | | | | 1 | Used in Pentium 4
Trace caches | + | | | | 3 | Used in Pentium 4
Pipelined cache access | - | + | | | 1 | Widely used
Nonblocking caches | | + | + | | 3 | Widely used
Banked caches | | + | | | 1 | Used in L2 of Opteron and Niagara
Critical word first and early restart | | | + | | 2 | Widely used
Merging write buffer | | | + | | 1 | Widely used with write through
Compiler techniques to reduce cache misses | | | | + | 0 | Software is a challenge; some computers have compiler option
Hardware prefetching of instructions and data | | | + | + | 2 instr., 3 data | Many prefetch instructions; AMD Opteron prefetches data
Compiler-controlled prefetching | | | + | + | 3 | Needs nonblocking cache; in many CPUs