Why Latency Lags Bandwidth, and What it Means to Computing




Why Latency Lags Bandwidth, and What it Means to Computing
David Patterson, U.C. Berkeley
patterson@cs.berkeley.edu
October 2004

Preview: Latency Lags Bandwidth

Over the last 20 to 25 years, for 4 disparate technologies, latency lags bandwidth:
- Bandwidth improved 120X to 2200X
- Latency improved only 4X to 20X
This talk explains why, and how to cope.

[Figure: log-log plot of relative BW improvement (1 to 10000) vs. relative latency improvement (1 to 100); the diagonal marks "latency improvement = bandwidth improvement".]

Outline

- Drill down into 4 technologies: ~1980 Archaic (Nostalgic) vs. ~2000 Modern (Newfangled)
- Performance milestones in each technology
- A rule of thumb for BW vs. latency
- 6 reasons it occurs
- 3 ways to cope
- 2 examples of BW-oriented system design
- Is this too optimistic (it's even worse)?

FYI: "Latency Lags Bandwidth" appears in the October 2004 Communications of the ACM.

Disks: Archaic (Nostalgic) vs. Modern (Newfangled)

                 CDC Wren I, 1983     Seagate 373453, 2003
RPM              3,600                15,000                            (4X)
Capacity         0.03 GBytes          73.4 GBytes                       (2500X)
Tracks/inch      800                  64,000                            (80X)
Bits/inch        9,550                533,000                           (60X)
Platters         three 5.25"          four 2.5" (in 3.5" form factor)
Bandwidth        0.6 MBytes/sec       86 MBytes/sec                     (140X)
Latency          48.3 ms              5.7 ms                            (8X)
Cache            none                 8 MBytes
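As a rough cross-check of these latencies, here is a minimal Python sketch of a textbook disk-access model (average seek + half a rotation + one-sector transfer); the seek times are assumptions back-solved to reproduce the figures above, not manufacturer specs:

```python
# Toy disk-latency model: average seek + half a rotation + sector transfer.
# The seek values below are assumptions chosen to match the slide's numbers.
def disk_latency_ms(rpm, seek_ms, bw_mbytes_per_s, sector_bytes=512):
    rotational_ms = 0.5 * 60_000 / rpm                      # half a revolution
    transfer_ms = sector_bytes / (bw_mbytes_per_s * 1e6) * 1e3
    return seek_ms + rotational_ms + transfer_ms

print(disk_latency_ms(3600, seek_ms=39.1, bw_mbytes_per_s=0.6))   # ~48.3 ms
print(disk_latency_ms(15000, seek_ms=3.7, bw_mbytes_per_s=86))    # ~5.7 ms
```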

Latency Lags Bandwidth (for last ~20 years)

[Figure: log-log plot of relative BW improvement (1 to 10000) vs. relative latency improvement (1 to 100), with the Disk milestones plotted against the "latency improvement = bandwidth improvement" diagonal.]

Performance milestones
- Disk: 3600, 5400, 7200, 10000, 15000 RPM (8X latency, 143X BW)

(Latency = a simple operation without contention; BW = best case.)

Memory: Archaic (Nostalgic) vs. Modern (Newfangled)

                 1980 DRAM (asynchronous)   2000 Double Data Rate SDRAM (clocked)
Capacity         0.06 Mbits/chip            256.00 Mbits/chip             (4000X)
Transistors/die  64,000 xtors, 35 mm^2      256,000,000 xtors, 204 mm^2
Data bus         16-bit per module          64-bit per DIMM               (4X)
Pins/chip        16                         66
Bandwidth        13 Mbytes/sec              1600 Mbytes/sec               (120X)
Latency          225 ns                     52 ns                         (4X)
Block transfers  none                       page mode

Latency Lags Bandwidth (last ~20 years)

[Figure: same log-log plot, now with the Memory and Disk milestones plotted against the diagonal.]

Performance milestones
- Memory module: 16-bit plain DRAM, page mode DRAM, 32b, 64b, SDRAM, DDR SDRAM (4X latency, 120X BW)
- Disk: 3600, 5400, 7200, 10000, 15000 RPM (8X latency, 143X BW)

(Latency = a simple operation without contention; BW = best case.)

LANs: Archaic (Nostalgic) vs. Modern (Newfangled)

                   Ethernet 802.3     Ethernet 802.3ae
Year of standard   1978               2003
Link speed         10 Mbits/s         10,000 Mbits/s          (1000X)
Latency            3,000 µsec         190 µsec                (15X)
Media              shared             switched
Cabling            coaxial cable      Category 5 copper wire

Coaxial cable: copper core, insulator, braided outer conductor, plastic covering.
"Cat 5" is 4 twisted pairs in a bundle: copper, 1 mm thick, twisted to avoid antenna effects.

Latency Lags Bandwidth (last ~20 years)

[Figure: same log-log plot, now with the Network, Memory, and Disk milestones plotted against the diagonal.]

Performance milestones
- Ethernet: 10 Mb, 100 Mb, 1000 Mb, 10000 Mb/s (16X latency, 1000X BW)
- Memory module: 16-bit plain DRAM, page mode DRAM, 32b, 64b, SDRAM, DDR SDRAM (4X latency, 120X BW)
- Disk: 3600, 5400, 7200, 10000, 15000 RPM (8X latency, 143X BW)

(Latency = a simple operation without contention; BW = best case.)

CPUs: Archaic (Nostalgic) vs. Modern (Newfangled)

                   1982 Intel 80286            2001 Intel Pentium 4
Clock rate         12.5 MHz                    1,500 MHz                   (120X)
Peak MIPS          2                           4,500                       (2250X)
Latency            320 ns                      15 ns                       (20X)
Transistors/die    134,000 xtors, 47 mm^2      42,000,000 xtors, 217 mm^2
Data bus / pins    16-bit, 68 pins             64-bit, 423 pins
Microarchitecture  microcode interpreter,      3-way superscalar, dynamic translation
                   separate FPU chip,          to RISC, superpipelined (22 stages),
                   no caches                   out-of-order execution
Caches             none                        on-chip 8 KB data cache, 96 KB instruction
                                               trace cache, 256 KB L2 cache

Latency Lags Bandwidth (last ~20 years)

Note: Processor shows the biggest improvement, Memory the smallest.

[Figure: same log-log plot with all four technologies plotted against the diagonal.]

Performance milestones
- Processor: 286, 386, 486, Pentium, Pentium Pro, Pentium 4 (21X latency, 2250X BW)
- Ethernet: 10 Mb, 100 Mb, 1000 Mb, 10000 Mb/s (16X latency, 1000X BW)
- Memory module: 16-bit plain DRAM, page mode DRAM, 32b, 64b, SDRAM, DDR SDRAM (4X latency, 120X BW)
- Disk: 3600, 5400, 7200, 10000, 15000 RPM (8X latency, 143X BW)

(Latency = a simple operation without contention; BW = best case.)

Annual Improvement per Technology

                                                 CPU    DRAM   LAN    Disk
Annual BW improvement (all milestones)           1.50   1.27   1.39   1.28
Annual latency improvement (all milestones)      1.17   1.07   1.12   1.11
Annual BW improvement (last 3 milestones)        1.55   1.30   1.78   1.29
Annual latency improvement (last 3 milestones)   1.22   1.06   1.13   1.09

Again, the CPU changes fastest and DRAM slowest; the last-3-milestone rows show the recent trend. How to summarize the BW vs. latency change?
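To see where such annual rates come from, a minimal sketch: the annual rate is the geometric mean of the total milestone-to-milestone gain over the period. The ~20-year span below is an assumption consistent with the talk's framing:

```python
# Annual improvement as a geometric-mean rate:
# rate = total_improvement ** (1 / years).
def annual_rate(total_improvement, years):
    return total_improvement ** (1 / years)

# Disk milestones from the earlier slides: ~143X BW and ~8X latency,
# assuming the gains accrued over roughly 20 years.
print(round(annual_rate(143, 20), 2))   # ~1.28, the disk BW column above
print(round(annual_rate(8, 20), 2))     # ~1.11, the disk latency column above
```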

Towards a Rule of Thumb

How long does bandwidth take to double, and how much does latency improve in that time?

                                                      CPU    DRAM   LAN    Disk
Time for BW to double (years, all milestones)         1.7    2.9    2.1    2.8
Latency improvement in that time (all milestones)     1.3X   1.2X   1.3X   1.3X
Time for BW to double (years, last 3 milestones)      1.6    2.7    1.2    2.7
Latency improvement in that time (last 3 milestones)  1.4X   1.2X   1.2X   1.3X

Despite the faster LAN, latency improves only 1.2X to 1.4X in the time bandwidth doubles.
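The doubling times and latency gains above follow mechanically from the annual rates on the previous slide; a minimal sketch:

```python
import math

# Years for BW to double at a given annual rate, and the latency gain
# accumulated over that window (rates from the previous slide's table).
def years_to_double(bw_rate):
    return math.log(2) / math.log(bw_rate)

def latency_gain_while_bw_doubles(bw_rate, lat_rate):
    return lat_rate ** years_to_double(bw_rate)

print(round(years_to_double(1.50), 1))                      # CPU: ~1.7 years
print(round(latency_gain_while_bw_doubles(1.50, 1.17), 1))  # CPU: ~1.3X
```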

Rule of Thumb for Latency Lagging BW

In the time that bandwidth doubles, latency improves by no more than a factor of 1.2 to 1.4 (and capacity improves faster than bandwidth).

Stated alternatively: bandwidth improves by more than the square of the improvement in latency.
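A one-line check of the "square" formulation: a 1.2X to 1.4X latency gain squared is 1.44X to 1.96X, still below the 2X bandwidth gain earned in the same window:

```python
# Rule-of-thumb check: BW doubles (2X) while latency improves 1.2X-1.4X,
# so the BW gain exceeds the square of the latency gain.
for lat_gain in (1.2, 1.3, 1.4):
    print(lat_gain, round(lat_gain ** 2, 2), 2.0 > lat_gain ** 2)
```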

What if Latency Didn't Lag BW?

Life would have been simpler for designers if latency had kept up with bandwidth: e.g., a 0.1 nanosecond latency processor, 2 nanosecond latency memory, 3 microsecond latency LANs, and 0.3 millisecond latency disks.

So why does latency lag?

6 Reasons Latency Lags Bandwidth

1. Moore's Law helps BW more than latency
   Faster transistors, more transistors, and more pins help bandwidth:
   - MPU transistors: 0.13 vs. 42 M xtors (300X)
   - DRAM transistors: 0.064 vs. 256 M xtors (4000X)
   - MPU pins: 68 vs. 423 (6X)
   - DRAM pins: 16 vs. 66 (4X)
   But the smaller, faster transistors communicate over (relatively) longer lines, which limits latency:
   - Feature size: 1.5 to 3 vs. 0.18 micron (8X to 17X)
   - MPU die size: 47 vs. 217 mm^2 (square root of the ratio ~2X)
   - DRAM die size: 35 vs. 204 mm^2 (square root of the ratio ~2X)

6 Reasons Latency Lags Bandwidth (cont'd)

2. Distance limits latency
   - Size of the DRAM block: long bit lines and word lines account for most of the DRAM access time
   - Speed of light limits computers communicating over a network
   (Do reasons 1 and 2 explain linear latency vs. square-law BW?)

3. Bandwidth is easier to sell ("bigger = better")
   - E.g., 10 Gbits/s Ethernet ("10 Gig") vs. 10 µsec latency Ethernet
   - 4400 MB/s DIMM ("PC4400") vs. 50 ns latency DIMM
   Even if it started as marketing, customers are now trained to buy bandwidth.
   And since bandwidth sells, more resources are thrown at bandwidth, which further tips the balance.

6 Reasons Latency Lags Bandwidth (cont'd)

4. Latency helps BW, but not vice versa
   - Spinning a disk faster improves both bandwidth and rotational latency:
     3600 RPM -> 15,000 RPM is 4.2X; average rotational latency drops from 8.3 ms to 2.0 ms;
     other things being equal, the faster spin also helps BW by 4.2X
   - Lower DRAM latency permits more accesses per second (higher bandwidth)
   - Higher linear density helps disk BW (and capacity), but not disk latency:
     9,550 BPI -> 533,000 BPI gives ~60X in BW
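The rotational arithmetic on this slide is easy to reproduce; a minimal sketch:

```python
# Average rotational latency is the time for half a revolution.
def avg_rotational_latency_ms(rpm):
    return 0.5 * 60_000 / rpm   # 60,000 ms per minute

print(round(avg_rotational_latency_ms(3600), 1))    # 8.3 ms
print(round(avg_rotational_latency_ms(15000), 1))   # 2.0 ms
print(round(15000 / 3600, 1))                       # 4.2X, which helps BW too
```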

6 Reasons Latency Lags Bandwidth (cont'd)

5. Bandwidth hurts latency
   - Queues help bandwidth but hurt latency (queuing theory)
   - Adding chips to widen a memory module increases bandwidth, but the higher
     fan-out on the address lines may increase latency

6. Operating system overhead hurts latency more than bandwidth
   - Long messages amortize the overhead; overhead is a bigger fraction of short messages
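Reason 5 is standard queuing theory. As an illustration (a textbook M/M/1 result, not from the talk): mean time in the system is 1/(mu - lambda), which blows up as utilization, i.e. delivered bandwidth, approaches capacity:

```python
# M/M/1 mean time in system: 1 / (mu - lambda). Pushing utilization up
# for bandwidth drives latency toward infinity.
def mm1_time_in_system(service_rate, arrival_rate):
    assert arrival_rate < service_rate, "queue unstable at or past capacity"
    return 1.0 / (service_rate - arrival_rate)

for utilization in (0.5, 0.9, 0.99):   # fraction of peak bandwidth in use
    print(utilization, mm1_time_in_system(1.0, utilization))   # 2, 10, 100
```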

3 Ways to Cope with Latency Lags Bandwidth

"If a problem has no solution, it may not be a problem, but a fact--not to be solved, but to be coped with over time." -- Shimon Peres ("Peres's Law")

1. Caching (leveraging capacity): processor caches, file cache, disk cache
2. Replication (leveraging capacity): read from the nearest head in a RAID, from the nearest site in content distribution
3. Prediction (leveraging bandwidth): branch prediction, plus prefetching for disks and caches
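As an illustration of coping technique 1, a minimal Python sketch that hides a high-latency read behind a cache; read_block and its 5 ms delay are hypothetical stand-ins for any slow device access:

```python
import time
from functools import lru_cache

# Caching: trade cheap, fast-growing capacity for latency by memoizing a
# slow fetch. The 5 ms sleep stands in for a disk or network access.
@lru_cache(maxsize=1024)
def read_block(block_id):
    time.sleep(0.005)
    return f"data-{block_id}"

read_block(7)   # slow: goes to the "device"
read_block(7)   # fast: served from the cache
```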

BW vs. Latency: MPU State of the Art

Latency is fought via caches: the Intel Itanium 2 has 4 caches on chip!
- Two level-1 caches: 16 KB instruction and 16 KB data
- Level-2 cache: 256 KB
- Level-3 cache: 3072 KB
211 M transistors, ~85% of them for caches; die size 421 mm^2; 130 watts @ 1 GHz.
Roughly 1% of the die to change data, 99% to move and store data?

[Die photo: L1 I$, L1 D$, L2 $, L3 tag, bus control, L3 $.]

HW BW Example: Micro Massively Parallel Processor (µMPP)

- Intel 4004 (1971): 4-bit processor, 2,312 transistors, 0.4 MHz, 10 micron PMOS, 11 mm^2 chip
- RISC II (1983): 32-bit, 5-stage pipeline, 40,760 transistors, 3 MHz, 3 micron NMOS, 60 mm^2 chip
- The 4004 shrinks to ~1 mm^2 at 3 micron; RISC II shrinks to ~0.05 mm^2 at 0.09 micron
- A 250 mm^2 chip in 0.090 micron CMOS = 2,312 RISC IIs, each with an I-cache and D-cache
- Caches via DRAM or 1-transistor SRAM (www.t-ram.com)
- Proximity communication via capacitive coupling at > 1 TB/s (Ivan Sutherland @ Sun)
- Is the processor the new transistor?
- Cost of ownership, dependability, and security vs. cost/performance => µMPP
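The shrink arithmetic above assumes die area scales with the square of the feature-size ratio (an idealization); a minimal sketch:

```python
# Idealized die shrink: area scales with (new_feature / old_feature) ** 2.
def shrunk_area_mm2(area_mm2, old_feature_um, new_feature_um):
    return area_mm2 * (new_feature_um / old_feature_um) ** 2

print(round(shrunk_area_mm2(11, 10, 3), 2))     # 4004: ~1 mm^2 at 3 micron
print(round(shrunk_area_mm2(60, 3, 0.09), 3))   # RISC II: ~0.05 mm^2 at 90 nm
# ~4,600 bare RISC II cores fit on 250 mm^2; the slide's 2,312 leaves the
# remaining area for each core's I-cache and D-cache.
print(int(250 // shrunk_area_mm2(60, 3, 0.09)))
```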

SW Design Example: Planning for BW Gains

Goal: a dependable storage system that keeps multiple replicas of data at remote sites.
- Caching (obviously) to reduce latency
- Replication: send requests to multiple copies and just use the quickest reply (see the sketch below)
- Prefetching to reduce latency
- Large block sizes for disk and memory
- Protocol: a few very large messages rather than a chatty protocol with lots of small messages
- A log-structured file system at each remote site
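A minimal sketch of the "quickest reply" replication idea, assuming a hypothetical fetch_from call standing in for a read from one replica site; it deliberately spends N requests of bandwidth to buy latency:

```python
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

# Replication for latency: ask every replica in parallel, return the first
# answer. fetch_from is a hypothetical stand-in for a network read.
def fetch_from(replica, key):
    ...  # issue the request to one replica site
    return b"value"

def read_quickest(replicas, key):
    with ThreadPoolExecutor(max_workers=len(replicas)) as pool:
        futures = [pool.submit(fetch_from, r, key) for r in replicas]
        done, _ = wait(futures, return_when=FIRST_COMPLETED)
        # quickest reply wins; the pool still drains stragglers on exit,
        # which is fine for a sketch
        return next(iter(done)).result()
```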

Too Optimistic So Far (it's even worse)?

- Optimistic: caching, replication, and prefetching simply get more popular to cope with the imbalance
- Pessimistic: these 3 are already fully deployed, so the next set of tricks must be found; that is hard!
- It's even worse: bandwidth gains are multiplied by parallelism across replicated components:
  - simultaneous communication in a switched LAN
  - multiple disks in a disk array
  - multiple memory modules in a large memory
  - multiple processors in a cluster or SMP

Conclusion: Latency Lags Bandwidth

- For disk, LAN, memory, and MPU, in the time that bandwidth doubles, latency improves by no more than 1.2X to 1.4X
- BW improves by more than the square of the latency improvement
- Innovations may yield a one-time latency reduction, but BW improvement is unrelenting
- If everything improves at the same rate, nothing really changes; when rates vary, real innovation is required
- HW and SW developers should innovate assuming latency lags bandwidth