361 Computer Architecture Lecture 14: Cache Memory




The Motivation for Caches

Memory system: Processor <-> Cache <-> DRAM

Motivation:
Large memories (DRAM) are slow; small memories (SRAM) are fast.
A cache makes the average access time small by servicing most accesses from a small, fast memory.
It also reduces the bandwidth required of the large (DRAM) memory.

Outline of Today's Lecture

Recap of Memory Hierarchy & Introduction to Cache
An In-depth Look at the Operation of Cache
Cache Write and Replacement Policy
Summary

An Expanded View of the Memory System

The processor (Control + Datapath) connects to a hierarchy of memories.
Moving away from the processor: Speed runs from fastest to slowest, Size from smallest to biggest, and Cost per bit from highest to lowest.

Levels of the Memory Hierarchy

Capacity / Access Time / Cost, from the upper (faster, smaller) level to the lower (slower, larger) level:
CPU Registers: 100s of bytes, <10s of ns
Cache: K bytes, 10-100 ns, $0.01-0.001/bit
Main Memory: M bytes, 100 ns-1 us, $0.01-0.001
Disk: G bytes, ms, 10^-3 - 10^-4 cents
Tape: infinite, sec-min, 10^-6 cents

Staging / transfer units between levels:
Registers <-> Memory: instruction operands, 1-8 bytes (managed by program/compiler)
Cache <-> Memory: blocks, 8-128 bytes (managed by the cache controller)
Memory <-> Disk: pages, 512-4K bytes (managed by the OS)
Disk <-> Tape: files, Mbytes (managed by the user/operator)

The Principle of Locality

[Figure: probability of reference plotted against the address space -- references cluster in a few small regions.]

What are the principles of Locality?

The Principle of Locality

Programs access a relatively small portion of the address space at any instant of time.
Example: 90% of execution time is spent in 10% of the code.

Two different types of locality:
Temporal Locality (locality in time): if an item is referenced, it will tend to be referenced again soon.
Spatial Locality (locality in space): if an item is referenced, items whose addresses are close by tend to be referenced soon.

Memory Hierarchy: Principles of Operation

At any given time, data is copied between only 2 adjacent levels:
Upper level (the cache): the one closer to the processor -- smaller, faster, and uses more expensive technology.
Lower level (memory): the one further from the processor -- bigger, slower, and uses less expensive technology.
Block: the minimum unit of information that can either be present or not present in the two-level hierarchy.
The processor exchanges words with the upper level (Block X); blocks move between the upper and lower levels (Block Y).

Memory Hierarchy: Terminology

Hit: the data appears in some block in the upper level (e.g., Block X).
Hit Rate: the fraction of memory accesses found in the upper level.
Hit Time: time to access the upper level, which consists of RAM access time + time to determine hit/miss.
Miss: the data must be retrieved from a block in the lower level (e.g., Block Y).
Miss Rate = 1 - Hit Rate
Miss Penalty = time to replace a block in the upper level + time to deliver the block to the processor.
Hit Time << Miss Penalty

Basic Cache Terminology: Typical Values

Block (line) size: 4-128 bytes
Hit time: 1-4 cycles
Miss penalty: 8-32 cycles (and increasing)
  access time: 6-10 cycles
  transfer time: 2-22 cycles
Miss rate: 1%-20%
Cache size: 1 KB-256 KB

How Does Cache Work?

Temporal Locality (locality in time): if an item is referenced, it will tend to be referenced again soon -- so keep the most recently accessed data items closer to the processor.
Spatial Locality (locality in space): if an item is referenced, items whose addresses are close by tend to be referenced soon -- so move blocks consisting of contiguous words into the cache.

The Simplest Cache: Direct Mapped

Example: a 16-location memory (addresses 0x0-0xF) and a 4-byte direct mapped cache (cache indexes 0-3).
Cache location 0 can be occupied by data from memory locations 0, 4, 8, or C -- in general, any memory location whose 2 LSBs are 0s.
Address<1:0> => cache index.
Which one should we place in the cache? How can we tell which one is in the cache?
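The mapping on this slide can be sketched in a few lines. This is an illustrative helper (the function name is mine, not the lecture's), assuming the 4-entry, 1-byte-block cache above:

```python
# Sketch of the 4-byte direct mapped cache above: with 1-byte blocks,
# the cache index is simply the address modulo the number of entries,
# i.e. its 2 least-significant bits.

def cache_index(addr: int, num_entries: int = 4) -> int:
    """Index = address mod number of cache entries (its low log2(N) bits)."""
    return addr % num_entries

# Every address whose 2 LSBs are 0 competes for cache slot 0.
assert [cache_index(a) for a in (0x0, 0x4, 0x8, 0xC)] == [0, 0, 0, 0]
assert cache_index(0x7) == 3  # address ...11 -> index 3
```

Because locations 0, 4, 8, and C all land on index 0, the cache also needs a tag to tell which of them is currently resident -- the question the next slide answers.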

Cache Tag and Index

Assume a 32-bit memory (byte) address and a 2^N-byte direct mapped cache:
Index: the lower N bits of the memory address.
Tag: the upper (32 - N) bits of the memory address, stored as part of the cache state along with a valid bit.
Example: Tag = 0x50, Index = 0x03.
The cache holds 2^N one-byte entries, indexed 0 through 2^N - 1.

Cache Access Example

Start up: all valid bits are 0, so the first access to any location is a miss.
Miss handling: load the data, write the tag, and set the valid bit.
A second access to the same location is a HIT; the first access to a second location is again a miss, and its repeat access is a HIT.
Sad fact of life: a lot of misses at start up. These are Compulsory Misses (cold start misses).
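The access example above can be reproduced with a minimal direct mapped cache model (a sketch of mine, not code from the lecture), showing the compulsory misses at start-up:

```python
# Minimal direct mapped cache: valid bit + tag per entry. The first touch
# of each location is a compulsory (cold start) miss; repeats hit.

class DirectMappedCache:
    def __init__(self, n_index_bits: int):
        self.n = n_index_bits
        self.valid = [False] * (1 << n_index_bits)
        self.tag = [0] * (1 << n_index_bits)

    def access(self, addr: int) -> bool:
        """Return True on a hit; on a miss, load the block (write tag, set V)."""
        index = addr & ((1 << self.n) - 1)   # lower N bits
        tag = addr >> self.n                 # upper (32 - N) bits
        if self.valid[index] and self.tag[index] == tag:
            return True
        self.valid[index] = True             # miss handling: load data,
        self.tag[index] = tag                # write tag & set valid bit
        return False

c = DirectMappedCache(n_index_bits=2)
trace = [c.access(a) for a in (0b000, 0b000, 0b010, 0b010)]
assert trace == [False, True, False, True]   # miss, HIT, miss, HIT
```

Both start-up misses here are compulsory: no amount of extra capacity or associativity would avoid the very first reference to a block.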

Definition of a Cache Block

Cache block: the unit of cache data that has its own cache tag.
Our previous extreme example: a 4-byte direct mapped cache with Block Size = 1 byte.
It takes advantage of Temporal Locality: if a byte is referenced, it will tend to be referenced again soon.
It does not take advantage of Spatial Locality: if a byte is referenced, its adjacent bytes will be referenced soon.
To take advantage of Spatial Locality: increase the block size.

Example: 1 KB Direct Mapped Cache with 32 B Blocks

For a 2^N-byte cache:
The uppermost (32 - N) bits are always the Cache Tag (stored as part of the cache state, with a valid bit).
The lowest M bits are the Byte Select (Block Size = 2^M).
Example: Cache Tag = 0x50, Cache Index = 0x01, Byte Select = 0x00.
The 1 KB cache has 32 entries (indexes 0-31), each holding a 32-byte block (Byte 0-Byte 31, Byte 32-Byte 63, ..., Byte 992-Byte 1023).
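The address breakdown for this 1 KB / 32 B example can be sketched as follows (a hypothetical helper, assuming the field widths stated above: 5 byte-select bits, 5 index bits, 22 tag bits):

```python
# Address breakdown for a 1 KB direct mapped cache with 32 B blocks:
# |  tag (22 bits)  | index (5 bits) | byte select (5 bits) |

def split_address(addr: int, cache_bytes: int = 1024, block_bytes: int = 32):
    """Return (tag, index, byte_select) for a 32-bit byte address."""
    m = block_bytes.bit_length() - 1                      # M = 5
    n = cache_bytes.bit_length() - 1                      # N = 10
    byte_select = addr & (block_bytes - 1)                # low M bits
    index = (addr >> m) & ((cache_bytes // block_bytes) - 1)  # next N-M bits
    tag = addr >> n                                       # upper 32-N bits
    return tag, index, byte_select

# The slide's example values: tag 0x50, index 0x01, byte select 0x00.
assert split_address((0x50 << 10) | (0x01 << 5)) == (0x50, 0x01, 0x00)
```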

Block Size Tradeoff

In general, a larger block size takes advantage of spatial locality, BUT:
A larger block size means a larger miss penalty -- it takes longer to fill up the block.
If the block size is too big relative to the cache size, the miss rate goes up -- too few blocks compromises temporal locality.
Average Access Time = Hit Time x (1 - Miss Rate) + Miss Penalty x Miss Rate
As block size grows, the miss penalty rises steadily; the miss rate first falls (exploiting spatial locality) and then rises (fewer blocks). The average access time therefore has a minimum, increasing once the larger miss penalty and miss rate dominate.

Another Extreme Example

Cache Size = 4 bytes, Block Size = 4 bytes: only ONE entry in the cache.
True: if an item is accessed, it is likely to be accessed again soon.
But it is unlikely to be accessed again immediately! The next access will likely be a miss again.
The cache continually loads data, then discards (forces out) it before it is used again -- the worst nightmare of a cache designer: the Ping Pong Effect.
Conflict Misses are misses caused by different memory locations mapped to the same cache index.
Solution 1: make the cache size bigger.
Solution 2: multiple entries for the same cache index.
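The tradeoff formula above is easy to evaluate numerically. The cycle counts below are illustrative values of mine (in the ranges from the typical-values slide), not measurements:

```python
# Average Access Time = Hit Time x (1 - Miss Rate) + Miss Penalty x Miss Rate
# (the formula from the slide).

def average_access_time(hit_time, miss_penalty, miss_rate):
    return hit_time * (1 - miss_rate) + miss_penalty * miss_rate

# Doubling the block size (illustrative numbers): the miss rate may halve
# thanks to spatial locality, but the miss penalty grows with the block.
small_blocks = average_access_time(1, 32, 0.10)  # 0.9*1 + 32*0.10 = 4.1
big_blocks = average_access_time(1, 64, 0.05)    # 0.95*1 + 64*0.05 = 4.15
assert small_blocks < big_blocks   # here the larger penalty wins out
```

The point of the sketch: a lower miss rate does not automatically mean a lower average access time once the miss penalty has grown enough.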

A Two-way Set Associative Cache

N-way set associative: N entries for each cache index -- N direct mapped caches operating in parallel.
Example: a two-way set associative cache.
The cache index selects a set from the cache.
The two tags in the set are compared in parallel (one comparator per way).
Data is selected by a mux based on the tag compare result; the OR of the two compares gives Hit.

Disadvantage of Set Associative Caches

N-way set associative versus direct mapped:
N comparators vs. 1.
Extra MUX delay for the data.
Data arrives AFTER Hit/Miss is determined.
In a direct mapped cache, the cache block is available BEFORE Hit/Miss: it is possible to assume a hit and continue, recovering later on a miss.
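The two-way lookup above can be sketched directly: both ways' tags are compared "in parallel", the hit signals are ORed, and a mux picks the winning way's data. This is an illustrative model of mine, not lecture code:

```python
# Two-way set associative lookup: index selects a set, both tags are
# compared (hardware does this in parallel), a mux selects the data.

def lookup_2way(sets, index, adr_tag):
    """sets[index] is a pair of ways, each (valid, tag, data_block)."""
    way0, way1 = sets[index]
    hit0 = way0[0] and way0[1] == adr_tag     # Compare, way 0
    hit1 = way1[0] and way1[1] == adr_tag     # Compare, way 1
    hit = hit0 or hit1                        # OR of the comparators
    data = way1[2] if hit1 else way0[2]       # mux on the tag result
    return hit, (data if hit else None)

sets = [[(True, 0x12, "blk A"), (True, 0x34, "blk B")]]
assert lookup_2way(sets, 0, 0x34) == (True, "blk B")
assert lookup_2way(sets, 0, 0x99) == (False, None)
```

Note how `data` is only valid after the tag comparison resolves -- exactly the extra mux delay the slide calls a disadvantage.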

And Yet Another Extreme Example: Fully Associative

Fully associative: push the set associative idea to its limit -- forget about the index, and compare the tags of ALL cache entries in parallel.
Example: with 32 B blocks, we need N 27-bit comparators (the low 5 bits are the byte select, e.g., Byte Select = 0x01).
By definition: Conflict Miss = 0 for a fully associative cache.

A Summary on Sources of Cache Misses

Compulsory (cold start, first reference): the first access to a block.
Cold fact of life: not a whole lot you can do about it.
Conflict (collision): multiple memory locations mapped to the same cache location.
Solution 1: increase cache size. Solution 2: increase associativity.
Capacity: the cache cannot contain all blocks accessed by the program.
Solution: increase cache size.
Invalidation: another process (e.g., I/O) updates memory.
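The fully associative lookup described above can be sketched the same way as the two-way case, just with no index at all (an illustrative model; hardware would use one comparator per entry rather than a loop):

```python
# Fully associative lookup: no index -- any entry may hold any block, so
# every entry's tag is compared against the address tag.

def fa_lookup(entries, adr_tag):
    """entries: list of (valid, tag, data). Loop stands in for N parallel
    27-bit comparators."""
    for valid, tag, data in entries:
        if valid and tag == adr_tag:
            return True, data
    return False, None

entries = [(True, 0x7FFFFFF, "blk X"), (True, 0x1, "blk Y"), (False, 0x0, None)]
assert fa_lookup(entries, 0x1) == (True, "blk Y")
assert fa_lookup(entries, 0x2) == (False, None)
```

Since a block can live anywhere, no two addresses are forced to share a slot -- which is why the conflict miss count is zero by definition.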

Sources of Cache Misses: Quiz

For direct mapped, N-way set associative, and fully associative caches, categorize the cache size and the compulsory, conflict, capacity, and invalidation misses as high, medium, low, or zero.

Sources of Cache Misses: Answer

(Direct Mapped / N-way Set Associative / Fully Associative)
Cache Size: Big / Medium / Small
Compulsory Miss: High (but who cares! -- see Note) / Medium / Low
Conflict Miss: High / Medium / Zero
Capacity Miss: Low / Medium / High
Invalidation Miss: Same / Same / Same

Note: if you are going to run billions of instructions, Compulsory Misses are insignificant.

Summary

The Principle of Locality: programs access a relatively small portion of the address space at any instant of time.
Temporal Locality: locality in time.
Spatial Locality: locality in space.
Three major categories of cache misses:
Compulsory Misses: sad facts of life; example: cold start misses.
Conflict Misses: increase cache size and/or associativity. Nightmare scenario: the ping pong effect!
Capacity Misses: increase cache size.

What About ... Replacement? Writing? NEXT CLASS

The Need to Make a Decision!

Direct mapped cache: each memory location can only be mapped to 1 cache location, so there is no decision to make :-) -- the current item replaces the previous item in that cache location.
N-way set associative cache: each memory location has a choice of N cache locations.
Fully associative cache: each memory location can be placed in ANY cache location.
On a miss in an N-way set associative or fully associative cache: bring in the new block from memory and throw out a cache block to make room for it. Damn! We need to make a decision on which block to throw out!

Cache Block Replacement Policy

Random Replacement: hardware randomly selects a cache block and throws it out.
Least Recently Used: hardware keeps track of the access history and replaces the entry that has not been used for the longest time.
Example of a simple "pseudo" least recently used implementation:
Assume 64 fully associative entries (Entry 0, Entry 1, ..., Entry 63).
A hardware replacement pointer points to one cache entry.
Whenever an access is made to the entry the pointer points to, move the pointer to the next entry; otherwise, do not move the pointer.
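The pseudo-LRU pointer scheme above can be sketched in a few lines (an illustrative model of mine; real hardware would be a counter plus a comparator):

```python
# Pseudo-LRU with a single replacement pointer over 64 fully associative
# entries: touching the pointed-to entry nudges the pointer forward, so
# the victim is never the entry that was just used.

class PseudoLRU:
    def __init__(self, n_entries: int = 64):
        self.n = n_entries
        self.pointer = 0   # hardware replacement pointer

    def access(self, entry: int) -> None:
        # Move the pointer only when the accessed entry is the one it
        # points to; otherwise leave it where it is.
        if entry == self.pointer:
            self.pointer = (self.pointer + 1) % self.n

    def victim(self) -> int:
        """The entry that would be thrown out on the next miss."""
        return self.pointer

p = PseudoLRU()
p.access(0)            # the pointed-to entry was used: pointer moves to 1
p.access(5)            # some other entry: pointer stays put
assert p.victim() == 1
```

This is only an approximation of true LRU -- it guarantees the victim is not the most recently pointed-at entry, at a fraction of the bookkeeping cost of full access-history tracking.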

Cache Write Policy: Write Through versus Write Back

A cache read is much easier to handle than a cache write; an instruction cache is much easier to design than a data cache.
On a cache write: how do we keep the data in the cache and memory consistent? Two options (decision time again :-):
Write Back: write to the cache only, and write the cache block back to memory when that block is being replaced on a cache miss.
  Needs a "dirty" bit for each cache block.
  Greatly reduces the memory bandwidth requirement.
  Control can be complex.
Write Through: write to the cache and memory at the same time.
  What!!! How can this be? Isn't memory too slow for this?

Write Buffer for Write Through

Processor -> Cache + Write Buffer -> DRAM.
A Write Buffer is needed between the cache and memory: the processor writes data into the cache and the write buffer, and the memory controller writes the contents of the buffer to memory.
The write buffer is just a FIFO; typical number of entries: 4.
Works fine if: store frequency (w.r.t. time) << 1 / DRAM write cycle.
Memory system designer's nightmare: store frequency (w.r.t. time) -> 1 / DRAM write cycle, i.e. write buffer saturation.
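The write-through path above can be sketched with a bounded FIFO (an illustrative model; depth 4 is the "typical" value from the slide, and the method names are mine):

```python
# Write buffer for a write-through cache: a small FIFO between the
# processor and DRAM. The processor stalls only when the FIFO is full.
from collections import deque

class WriteBuffer:
    def __init__(self, depth: int = 4):
        self.fifo = deque()
        self.depth = depth

    def store(self, addr, data) -> bool:
        """Processor side: returns False (stall) if the buffer is saturated."""
        if len(self.fifo) == self.depth:
            return False
        self.fifo.append((addr, data))
        return True

    def drain_one(self):
        """Memory-controller side: retire the oldest write to DRAM."""
        return self.fifo.popleft() if self.fifo else None

wb = WriteBuffer()
assert all(wb.store(a, 0) for a in range(4))   # four stores fit
assert wb.store(4, 0) is False                 # a fifth store would stall
assert wb.drain_one() == (0, 0)                # FIFO: oldest write first
assert wb.store(4, 0) is True                  # room again after draining
```

The saturation condition on the slide falls out directly: if stores arrive faster than `drain_one` can run (one DRAM write cycle each), the FIFO fills and every subsequent store stalls, no matter how deep the buffer is.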

Write Buffer Saturation

Store frequency (w.r.t. time) -> 1 / DRAM write cycle.
If this condition exists for a long period of time (CPU cycle time too fast and/or too many store instructions in a row), the store buffer will overflow no matter how big you make it, because the CPU cycle time <= DRAM write cycle time.
Solutions for write buffer saturation:
Use a write back cache.
Install a second-level (L2) cache between the write buffer and DRAM: Processor -> L2 Cache -> DRAM.

Write Miss: Write Allocate versus Write Not Allocate

Assume a 16-bit write to memory location 0x00 causes a cache miss (in the earlier 1 KB direct mapped cache with 32 B blocks: Cache Tag = 0x00, Cache Index = 0x00, Byte Select = 0x00).
Do we read in the rest of the block (bytes 2, 3, ..., 31)?
Yes: Write Allocate.
No: Write Not Allocate.
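The write-miss decision above can be sketched as follows (an illustrative toy of mine -- the function name and cache representation are assumptions, not the lecture's):

```python
# Write-allocate vs. write-not-allocate on a write miss: an allocating
# cache fetches the rest of the block and installs it; a non-allocating
# cache sends the write to memory and leaves the cache untouched.

def handle_write_miss(cache, index, tag, write_allocate: bool) -> str:
    if write_allocate:
        cache[index] = {"valid": True, "tag": tag}  # read in the full block
        return "allocated"
    return "sent to memory only"                    # cache state unchanged

cache = {}
assert handle_write_miss(cache, 0, 0x0, write_allocate=False) == "sent to memory only"
assert cache == {}                                  # cache untouched
assert handle_write_miss(cache, 0, 0x0, write_allocate=True) == "allocated"
assert cache[0]["valid"] is True
```

Write-not-allocate pairs naturally with write-through (as in the SPARC data cache described later), since the write goes to memory anyway.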

What is a Sub-block?

Sub-block: a unit within a block that has its own valid bit.
Example: a 1 KB direct mapped cache with 32 B blocks and 8 B sub-blocks.
Each cache entry has 32/8 = 4 valid bits, one per sub-block (Sub-block 0 through Sub-block 3).
On a write miss, only the bytes in that sub-block are brought in.

SPARCstation 20's Memory System

Memory Controller on a 128-bit wide Memory Bus (SIMM Bus), with up to 8 memory modules (Memory Module 0 through Memory Module 7).
Processor Bus (MBus): 64 bits wide.
Processor Module (MBus Module): SuperSPARC Processor with an Instruction Cache, a Data Cache, a Register File, and an External Cache.

SPARCstation 20's External Cache

Size and organization: 1 MB, direct mapped.
Block size: 128 B.
Sub-block size: 32 B.
Write policy: write back, write allocate.

SPARCstation 20's Internal Instruction Cache

Size and organization: 20 KB, 5-way set associative.
Block size: 64 B.
Sub-block size: 32 B.
Write policy: does not apply (instructions are not written).
Note: the sub-block size is the same as the External (L2) cache's.

SPARCstation 20's Internal Data Cache

Size and organization: 16 KB, 4-way set associative.
Block size: 64 B.
Sub-block size: 32 B.
Write policy: write through, write not allocate.
The sub-block size is the same as the External (L2) cache's.

Two Interesting Questions

Why did they use an N-way set associative cache internally?
Answer: an N-way set associative cache is like having N direct mapped caches in parallel, and they wanted each of those N direct mapped caches to be 4 KB -- the same as the virtual page size (covered in next week's virtual memory lecture).
How many levels of cache does the SPARCstation 20 have?
Answer: three levels. (1) The internal I & D caches, (2) the external cache, and (3)...

SPARCstation 20's Memory Module

Supports a wide range of sizes:
Smallest: 4 MB -- 16 2-Mbit DRAM chips and 8 KB of Page Mode SRAM.
Biggest: 64 MB -- 32 16-Mbit chips and 16 KB of Page Mode SRAM.
Each 2-Mbit DRAM chip is organized as 512 rows x 512 cols x 8 bits (256K x 8), with a 512 x 8 Page Mode SRAM row buffer feeding 8 bits of the 128-bit memory bus (bits<7:0> up through bits<127:120>).

Summary

The Principle of Locality: programs access a relatively small portion of the address space at any instant of time.
Temporal Locality: locality in time.
Spatial Locality: locality in space.
Three major categories of cache misses:
Compulsory Misses: sad facts of life; example: cold start misses.
Conflict Misses: increase cache size and/or associativity. Nightmare scenario: the ping pong effect!
Capacity Misses: increase cache size.
Write policy:
Write Through: needs a write buffer. Nightmare: write buffer saturation.
Write Back: control can be complex.