Instruction Level Parallelism I: Pipelining


1 Instruction Level Parallelism I: Pipelining Readings: H&P Appendix A Instruction Level Parallelism I: Pipelining 1

2 This Unit: Pipelining Application OS Compiler Firmware CPU I/O Memory Digital Circuits Gates & Transistors Basic Pipelining Single, in-order issue Clock rate vs. IPC Data Hazards Hardware: stalling and bypassing Software: pipeline scheduling Control Hazards Branch prediction Precise state Instruction Level Parallelism I: Pipelining 2

3 Quick Review Single-cycle: insn0.fetch,dec,exec insn1.fetch,dec,exec Multi-cycle: insn0.fetch insn0.dec insn0.exec insn1.fetch insn1.dec insn1.exec Basic datapath: fetch, decode, execute Single-cycle control: hardwired + Low CPI (1) – Long clock period (to accommodate slowest instruction) Multi-cycle control: micro-programmed + Short clock period – High CPI Can we have both low CPI and short clock period? Not if datapath executes only one instruction at a time No good way to make a single instruction go faster Instruction Level Parallelism I: Pipelining 3

4 Pipelining Multi-cycle insn0.fetch insn0.dec insn0.exec insn1.fetch insn1.dec insn1.exec Pipelined insn0.fetch insn0.dec insn1.fetch insn0.exec insn1.dec insn1.exec Important performance technique Improves instruction throughput rather than instruction latency Begin with multi-cycle design When instruction advances from stage 1 to 2 Allow next instruction to enter stage 1 Form of parallelism: insn-stage parallelism Individual instruction takes the same number of stages + But instructions enter and leave at a much faster rate Automotive assembly line analogy Instruction Level Parallelism I: Pipelining 4

5 5-Stage Pipelined Datapath PC PC + 4 O PC I$ Register File s1 s2 d A B O B D$ D Temporary values (PC, IR, A, B, O, D) re-latched every stage Why? 5 insns may be in the pipeline at once; they can't share a single PC Notice, PC not latched after ALU stage (why not?) Instruction Level Parallelism I: Pipelining 5

6 Pipeline Terminology PC PC + 4 O PC I$ PC Register File s1 s2 d A B O B D$ D F/D D/X X/M M/W Five stages: Fetch, Decode, eXecute, Memory, Writeback Nothing magical about the number 5 (Pentium 4 has 22 stages) Latches (pipeline registers) named by stages they separate: PC, F/D, D/X, X/M, M/W Instruction Level Parallelism I: Pipelining 6

7 Pipeline Control PC PC + 4 O PC I$ Register File s1 s2 d A B O B D$ D xc mc wc CTRL mc wc wc One single-cycle controller, but pipeline the control signals Instruction Level Parallelism I: Pipelining 7

8 Abstract Pipeline regfile I$ D$ + 4 PC F/D D/X X/M M/W This is an integer pipeline Execution stages are X,M,W Usually also one or more floating-point (FP) pipelines Separate FP register file One pipeline per functional unit: E+, E*, E/ "Pipeline": the functional unit need not itself be pipelined (e.g., E/) Execution stages are E+,E+,W (no M) Instruction Level Parallelism I: Pipelining 8

9 Floating Point Pipelines I-regfile I$ D$ + 4 E* E* E* E + E + E/ F-regfile Instruction Level Parallelism I: Pipelining 9

10 Pipeline Diagram add r3,r2,r1 F D X M W ld r4,0(r5) F D X M W st r6,4(r7) F D X M W Pipeline diagram Cycles across, insns down Convention: X means ld r4,0(r5) finishes execute stage and writes into X/M latch at end of cycle 4 Reverse stream analogy Downstream : earlier stages, younger insns Upstream : later stages, older insns Reverse? instruction stream fixed, pipeline flows over it Architects see instruction stream as fixed by program/compiler Instruction Level Parallelism I: Pipelining 10

11 Pipeline Performance Calculation Back of the envelope calculation Branch: 20%, load: 20%, store: 10%, other: 50% Single-cycle Clock period = 50ns, CPI = 1 Performance = 50ns/insn Pipelined Clock period = 12ns CPI = 1 (each insn takes 5 cycles, but 1 completes each cycle) Performance = 12ns/insn Instruction Level Parallelism I: Pipelining 11
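The arithmetic above is easy to check. A minimal sketch in Python, using only the numbers from the slide (50ns and 12ns clock periods, CPI of 1 in both designs):

```python
# Back-of-the-envelope check of the slide's numbers.
single_cycle_period_ns = 50   # clock must fit the slowest instruction
pipelined_period_ns = 12      # slowest stage plus latch overhead
cpi = 1                       # one insn completes per cycle in both designs

print(single_cycle_period_ns * cpi, "ns/insn (single-cycle)")   # 50
print(pipelined_period_ns * cpi, "ns/insn (pipelined)")         # 12
print(f"speedup: {single_cycle_period_ns / pipelined_period_ns:.2f}x")  # ~4.17x
```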

12 Principles of Pipelining Let: insn execution require N cycles, each taking t_n seconds L_1 (1-insn latency) = Σt_n T (throughput) = 1/L_1 L_M (M-insn latency, where M>>1) = M*L_1 Now: N-stage pipeline L_1+P = L_1 T_+P = 1/max(t_n) ≥ N/L_1 If all t_n are equal (i.e., max(t_n) = L_1/N), throughput = N/L_1 L_M+P = M*max(t_n) ≥ M*L_1/N S_+P (speedup) = [M*L_1] / [M*L_1/N] = N Q: for arbitrarily high speedup, use arbitrarily high N? Instruction Level Parallelism I: Pipelining 12

13 No, Part I: Pipeline Overhead Let: O be extra delay per pipeline stage Latch overhead: pipeline latches take time Clock/data skew Now: N-stage pipeline with overhead Assume max(t_n) = L_1/N L_1+P+O = L_1 + N*O T_+P+O = 1/(L_1/N + O) = 1/(1/T_+P + O) < T_+P, < 1/O L_M+P+O = M*L_1/N + M*O = L_M+P + M*O S_+P+O = [M*L_1] / [M*L_1/N + M*O] < N = S_+P, < L_1/O O limits throughput and speedup → limits useful N Instruction Level Parallelism I: Pipelining 13

14 No, Part II: Hazards Dependence: relationship that serializes two insns Structural: two insns want to use same structure Data: two insns use same storage location Control: one instruction affects whether another executes at all Hazard: dependence and both insns in pipeline together Possibility for getting order wrong Often fixed with stalls: insn stays in same stage for multiple cycles Let: H be average number of hazard stall cycles per instruction L_1+P+H = L_1+P (no hazards for one instruction) T_+P+H = [N/(N+H)] * N/L_1 = [N/(N+H)] * T_+P L_M+P+H = M*L_1/N * [(N+H)/N] = [(N+H)/N] * L_M+P S_+P+H = M*L_1 / (M*L_1/N * [(N+H)/N]) = [N/(N+H)] * S_+P H also limits throughput and speedup → limits useful N N↑ → H↑ (more insns in flight → dependences become hazards) Exact H depends on program, requires detailed simulation Instruction Level Parallelism I: Pipelining 14
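The three latency/throughput derivations (ideal pipeline, overhead O, hazard term H) fold into one model. A sketch under the slides' assumptions (equal stage delays, M >> 1); combining O and H in one formula is an extrapolation, since the slides treat them separately:

```python
def speedup(L1, N, O=0.0, H=0.0, M=10**6):
    """Speedup of an N-stage pipeline over the unpipelined design.
    L1: 1-insn latency, O: per-stage latch/skew overhead,
    H: hazard term from the slide's [(N+H)/N] factor."""
    unpipelined = M * L1
    cycle = L1 / N + O                    # max(t_n) plus overhead
    pipelined = M * cycle * (N + H) / N   # L_M+P+O+H
    return unpipelined / pipelined

print(speedup(L1=50, N=5))            # 5.0: ideal, S = N
print(speedup(L1=50, N=5, O=1))       # ~4.55: overhead trims it
print(speedup(L1=50, N=5, O=1, H=1))  # ~3.79: hazards trim it further
```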

15 Clock Rate vs. IPC Deeper pipeline (bigger N) + frequency↑ – IPC↓ Ultimate metric is IPC * frequency But people buy frequency, not IPC * frequency Trend has been for deeper pipelines Intel example: 486: 5 stages (50+ gate delays / clock) Pentium: 7 stages Pentium II/III: 12 stages Pentium 4: 22 stages (10 gate delays / clock) 800 MHz Pentium III was faster than 1 GHz Pentium 4 No more GHz race, fewer stages: Core 1/2 = 14 stages, Core i7 = 16 stages Instruction Level Parallelism I: Pipelining 15

16 Optimizing Pipeline Depth Parameterize clock cycle in terms of gate delays G gate delays to process (fetch, decode, execute) a single insn O gate delays overhead per stage X average stall per instruction per stage Simplistic: real X function much, much more complex Compute optimal N (pipeline stages) given G, O, X IPC = 1/(CPI_ideal + CPI_stall) = 1/(1 + X*N) freq = 1/t_n = 1/(G/N + O) Example: G = 80, O = 1, X = 0.16: sweep N to maximize IPC*freq, where IPC = 1/(1 + 0.16*N) and freq = 1/(80/N + 1) Instruction Level Parallelism I: Pipelining 16
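Sweeping N with the example numbers shows the optimum directly. A sketch; frequency is in units of 1/gate-delay:

```python
G, O, X = 80, 1, 0.16   # gate delays/insn, overhead/stage, stall factor

def perf(N):
    ipc = 1 / (1 + X * N)    # stalls grow with depth
    freq = 1 / (G / N + O)   # cycle shrinks with depth
    return ipc * freq

best = max(range(1, 81), key=perf)
print(best, perf(best))      # N = 22 maximizes IPC * freq for these numbers
```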

17 Managing a Pipeline Proper flow requires two pipeline operations Mess with latch write-enable and clear signals to achieve Operation I: stall Effect: stops some insns in their current stages Use: make younger insns wait for older ones to complete Implementation: de-assert write-enable Operation II: flush Effect: removes insns from current stages Use: see later Implementation: assert clear signals Both stall and flush must be propagated to younger insns Instruction Level Parallelism I: Pipelining 17

18 Structural Hazards ld r2,0(r1) F D X M W add r1,r3,r4 F D X M W st r6,0(r1) F D X M W sub r1,r3,r5 F D X M W Structural hazard: resource needed twice in one cycle Example: shared I/D$ Instruction Level Parallelism I: Pipelining 18

19 Fixing Structural Hazards ld r2,0(r1) F D X M W add r1,r3,r4 F D X M W st r6,0(r1) F D X M W sub r1,r3,r5 s* F D X M W Can fix structural hazards by stalling s* = structural stall Q: which one to stall: ld or sub? Always safe to stall younger instruction (here sub) Fetch stall logic: (D/X.op == ld || D/X.op == st) But not always the best thing to do performance wise (?) + Low cost, simple – Decreases IPC Upshot: better to avoid by design than to fix Instruction Level Parallelism I: Pipelining 19

20 Avoiding Structural Hazards Replicate the contended resource + No IPC degradation – Increased area, power, latency (interconnect delay?) For cheap, divisible, or highly contended resources (e.g., I$/D$) Pipeline the contended resource + No IPC degradation, low area, power overheads – Sometimes tricky to implement (e.g., for RAMs) For multi-cycle resources (e.g., multiplier) Design ISA/pipeline to reduce structural hazards (RISC) Each insn uses a resource always for one cycle And at most once Always in same pipe stage Reason why integer operations are forced to go through M stage Instruction Level Parallelism I: Pipelining 20

21 Why Integer Operations Take 5 Cycles? [datapath figure: PC, +4, <<2, Insn Mem, Register File, F/D, D/X, X/M, M/W, Data Mem; add $3,$2,$1 in the pipeline behind lw $4,0($5)] Could/should we allow add to skip M and go to W? No It wouldn't help: peak fetch is still only 1 insn per cycle Structural hazards: imagine add follows lw Instruction Level Parallelism I: Pipelining 21

22 Data Hazards Real insn sequences pass values via registers/memory Three kinds of data dependences (where's the fourth?) add r2,r3 → r1 sub r1,r4 → r2 or r6,r3 → r1 Read-after-write (RAW) True-dependence add r2,r3 → r1 sub r5,r4 → r2 or r6,r3 → r1 Write-after-read (WAR) Anti-dependence add r2,r3 → r1 sub r1,r4 → r2 or r6,r3 → r1 Write-after-write (WAW) Output-dependence Only one dependence between any two insns (RAW has priority) Data hazards: function of data dependences and pipeline Potential for executing dependent insns in wrong order Require both insns to be in pipeline ("in flight") simultaneously Instruction Level Parallelism I: Pipelining 22

23 Dependences and Loops Data dependences in loops Intra-loop: within same iteration Inter-loop: across iterations Example: DAXPY (Double precision A X Plus Y) for (i=0;i<100;i++) Z[i]=A*X[i]+Y[i]; 0: ldf X(r1) → f2 1: mulf f0,f2 → f4 2: ldf Y(r1) → f6 3: addf f6,f4 → f8 4: stf f8 → Z(r1) 5: addi 8,r1 → r1 6: slti r1,800 → r2 7: beq r2,loop RAW intra: 0→1(f2), 1→3(f4), 2→3(f6), 3→4(f8), 5→6(r1), 6→7(r2) RAW inter: 5→0(r1), 5→2(r1), 5→4(r1), 5→5(r1) WAR intra: 0→5(r1), 2→5(r1), 4→5(r1) WAR inter: 1→0(f2), 3→1(f4), 3→2(f6), 4→3(f8), 6→5(r1), 7→6(r2) WAW intra: none WAW inter: 0→0(f2), 1→1(f4), 2→2(f6), 3→3(f8), 6→6(r2) Instruction Level Parallelism I: Pipelining 23

24 RAW Read-after-write (RAW) add r2,r3 → r1 sub r1,r4 → r2 or r6,r3 → r1 Problem: swap would mean sub uses wrong value for r1 True: value flows through this dependence Using different output register for add doesn't help Instruction Level Parallelism I: Pipelining 24

25 RAW: Detect and Stall regfile I$ D$ + 4 PC F/D D/X X/M M/W Stall logic: detect and stall reader in D (F/D.IR.rs1 & (F/D.IR.rs1==D/X.IR.rd || F/D.IR.rs1==X/M.IR.rd || F/D.IR.rs1==M/W.IR.rd)) || (F/D.IR.rs2 & (F/D.IR.rs2==D/X.IR.rd || F/D.IR.rs2==X/M.IR.rd || F/D.IR.rs2==M/W.IR.rd)) Re-evaluated every cycle until no longer true + Low cost, simple – IPC degradation, dependences are the common case Instruction Level Parallelism I: Pipelining 25
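The stall predicate reads more easily as executable logic. A minimal sketch, assuming each pipeline latch is a dict holding the register fields named on the slide (None for an unused field):

```python
def raw_stall(fd, dx, xm, mw):
    """True if the insn in F/D must stall: one of its sources matches
    the destination of an older insn still in D/X, X/M, or M/W."""
    older_dests = {latch["rd"] for latch in (dx, xm, mw)
                   if latch["rd"] is not None}
    return any(src is not None and src in older_dests
               for src in (fd["rs1"], fd["rs2"]))

# sub r1,r4 -> r2 in F/D while add r2,r3 -> r1 sits in D/X: stall
print(raw_stall({"rs1": 1, "rs2": 4},
                {"rd": 1}, {"rd": None}, {"rd": None}))   # True
```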

26 Two Stall Timings (without bypassing) Depend on how D and W stages share regfile Each gets regfile for half a cycle 1st half D reads, 2nd half W writes 3 cycle stall d* = data stall, p* = propagated stall add r2,r3 r1 F D X M W sub r1,r4 r2 F d* d* d* D X M W add r5,r6 r7 p* p* p* F D X M W + 1st half W writes, 2nd half D reads 2 cycle stall How does the stall logic change here? add r2,r3 r1 F D X M W sub r1,r4 r2 F d* d* D X M W add r5,r6 r7 p* p* F D X M W Instruction Level Parallelism I: Pipelining 26

27 Reducing RAW Stalls with Bypassing regfile D$ D/X X/M M/W Why wait until W stage? Data available after X or M stage Bypass (aka forward) data directly to input of X or M MX: from beginning of M (X output) to input of X WX: from beginning of W (M output) to input of X WM: from beginning of W (M output) to data input of M Two each of MX, WX (figure shows 1) + WM = full bypassing + Reduces stalls in a big way Additional wires and muxes may increase clock cycle Instruction Level Parallelism I: Pipelining 27

28 Bypass Logic regfile D$ D/X X/M M/W Bypass logic: similar to but separate from stall logic Stall logic controls latches, bypass logic controls mux inputs Complement one another: can't bypass → must stall ALU input mux bypass logic (D/X.IR.rs2 & X/M.IR.rd==D/X.IR.rs2) → 2 // check first (D/X.IR.rs2 & M/W.IR.rd==D/X.IR.rs2) → 1 // check second (D/X.IR.rs2) → 0 // check last Instruction Level Parallelism I: Pipelining 28
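The same priority chain as a function. A sketch, assuming the latch fields used above; the return values 2/1/0 select the MX bypass, the WX bypass, and the register-file value, matching the slide's mux encoding:

```python
def alu_src2_select(dx_rs2, xm_rd, mw_rd):
    """Mux select for the ALU's second input."""
    if dx_rs2 is not None and dx_rs2 == xm_rd:
        return 2   # MX bypass: youngest producer wins, so check first
    if dx_rs2 is not None and dx_rs2 == mw_rd:
        return 1   # WX bypass: check second
    return 0       # no match: use the register-file value
```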

29 Pipeline Diagrams with Bypassing If bypass exists, from / to stages execute in same cycle Example: full bypassing, use MX bypass add r2,r3 → r1 F D X M W sub r1,r4 → r2 F D X M W Example: full bypassing, use WX bypass add r2,r3 → r1 F D X M W ld [r7] → r5 F D X M W sub r1,r4 → r2 F D X M W Example: WM bypass add r2,r3 → r1 F D X M W ? F D X M W Can you think of a code example that uses the WM bypass? Instruction Level Parallelism I: Pipelining 29

30 Have We Prevented All Data Hazards? [datapath figure: Register File, F/D, D/X, X/M, M/W, Data Mem; lw $3,4($2) in X while the dependent add $4,$2,$3 stalls in D and a nop is inserted] No. Consider a load followed by a dependent add insn Bypassing alone isn't sufficient Solution? Detect this, and then stall the add by one cycle Instruction Level Parallelism I: Pipelining 30

31 Stalling to Avoid Data Hazards Register File s1 s2 d F/D D/X X/M nop A B S X O B a Data Mem d M/W O D hazard Prevent F/D insn from reading (advancing) this cycle Write nop into D/X. (effectively, insert nop in hardware) Also reset (clear) the datapath control signals Disable F/D latch and PC write enables (why?) Re-evaluate situation next cycle Instruction Level Parallelism I: Pipelining 31

32 Stalling on Load-To-Use Dependences Register File s1 s2 d A B S X a Data Mem F/D D/X X/M M/W nop O B d O D stall add $4,$2,$3 lw $3,4($2) Stall = (D/X.IR.op == ld) && ((F/D.IR.rs1 == D/X.IR.rd) || ((F/D.IR.rs2 == D/X.IR.rd) && (F/D.IR.op != st))) Instruction Level Parallelism I: Pipelining 32
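The load-use stall equation translates directly. A sketch with the same assumed latch-dict layout (op, rs1, rs2, rd fields):

```python
def load_use_stall(fd, dx):
    """Stall the F/D insn one cycle: a load's value isn't ready until M,
    too late even for the MX bypass. A store's data operand (rs2) is
    exempt because it can take the WM bypass."""
    return (dx["op"] == "ld" and
            (fd["rs1"] == dx["rd"] or
             (fd["rs2"] == dx["rd"] and fd["op"] != "st")))
```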

33 Stalling on Load-To-Use Dependences Register File s1 s2 d A B S X a Data Mem F/D D/X X/M M/W nop O B d O D stall add $4,$2,$3 (stall bubble) lw $3,4($2) Stall = (D/X.IR.op == ld) && ((F/D.IR.rs1 == D/X.IR.rd) || ((F/D.IR.rs2 == D/X.IR.rd) && (F/D.IR.op != st))) Instruction Level Parallelism I: Pipelining 33

34 Stalling on Load-To-Use Dependences Register File s1 s2 d A B S X O B a Data Mem F/D D/X X/M M/W nop d O D stall add $4,$2,$3 (stall bubble) lw $3,4($2) Stall = (D/X.IR.op == ld) && ((F/D.IR.rs1 == D/X.IR.rd) || ((F/D.IR.rs2 == D/X.IR.rd) && (F/D.IR.op != st))) Instruction Level Parallelism I: Pipelining 34

35 Load-Use Stalls Even with full bypassing, stall logic is unavoidable Load-use stall Load value not ready at beginning of M → can't use MX bypass Use WX bypass ld [r3+4] → r1 F D X M W sub r1,r4 → r2 F D d* X M W Aside: with WX bypassing, stall logic can be in D or X ld [r3+4] → r1 F D X M W sub r1,r4 → r2 F d* D X M W Aside II: how does stall/bypass logic handle cache misses? Instruction Level Parallelism I: Pipelining 35

36 Performance Impact of Load/Use Penalty Assume Branch: 20%, load: 20%, store: 10%, other: 50% 50% of loads are followed by a dependent instruction → require 1 cycle stall (i.e., insertion of 1 nop) Calculate CPI CPI = 1 + (1 * 20% * 50%) = 1.1 Instruction Level Parallelism I: Pipelining 36

37 Compiler Scheduling Compiler can schedule (move) insns to reduce stalls Basic pipeline scheduling: eliminate back-to-back load-use pairs Example code sequence: a = b + c; d = f - e; MIPS notation from here on (no more → notation): ld r2,4(sp) is ld [sp+4] → r2 st r1,0(sp) is st r1 → [sp+0] Before ld r2,4(sp) ld r3,8(sp) add r1,r3,r2 //stall st r1,0(sp) ld r5,16(sp) ld r6,20(sp) sub r4,r5,r6 //stall st r4,12(sp) After ld r2,4(sp) ld r3,8(sp) ld r5,16(sp) add r1,r3,r2 //no stall ld r6,20(sp) st r1,0(sp) sub r4,r5,r6 //no stall st r4,12(sp) Instruction Level Parallelism I: Pipelining 37

38 Compiler Scheduling Requires Large scheduling scope Independent instruction to put between load-use pairs + Original example: large scope, two independent computations This example: small scope, one computation Before ld r2,4(sp) ld r3,8(sp) add r1,r3,r2 //stall st r1,0(sp) After ld r2,4(sp) ld r3,8(sp) add r1,r3,r2 //stall st r1,0(sp) Instruction Level Parallelism I: Pipelining 38

39 Compiler Scheduling Requires Enough registers To hold additional live values Example code contains 7 different values (including sp) Before: max 3 values live at any time → 3 registers enough After: max 4 values live → 3 registers not enough → WAR violations Original ld r2,4(sp) ld r1,8(sp) add r1,r1,r2 //stall st r1,0(sp) ld r2,16(sp) ld r1,20(sp) sub r1,r2,r1 //stall st r1,12(sp) Wrong! ld r2,4(sp) ld r1,8(sp) ld r2,16(sp) add r1,r1,r2 //WAR ld r1,20(sp) st r1,0(sp) //WAR sub r1,r2,r1 st r1,12(sp) Instruction Level Parallelism I: Pipelining 39

40 Compiler Scheduling Requires Alias analysis Ability to tell whether load/store reference same memory locations Effectively, whether load/store can be rearranged Example code: easy, all loads/stores use same base register (sp) New example: can compiler tell that r8 = sp? Before ld r2,4(sp) ld r3,8(sp) add r1,r3,r2 //stall st r1,0(sp) ld r5,0(r8) ld r6,4(r8) sub r4,r5,r6 //stall st r4,8(r8) Wrong(?) ld r2,4(sp) ld r3,8(sp) ld r5,0(r8) add r1,r3,r2 ld r6,4(r8) st r1,0(sp) sub r4,r5,r6 st r4,8(r8) Instruction Level Parallelism I: Pipelining 40

41 WAW Hazards Write-after-write (WAW) add r1,r2,r3 sub r2,r1,r4 or r1,r6,r3 Compiler effects Scheduling problem: reordering would leave wrong value in r1 Later instruction reading r1 would get wrong value Artificial: no value flows through dependence Eliminate using different output register name for or Pipeline effects Doesn t affect in-order pipeline with single-cycle operations Another reason for making ALU operations go through M stage Can happen with multi-cycle operations (e.g., FP, some int. insn or cache misses) Instruction Level Parallelism I: Pipelining 41

42 Pipelining and Multi-Cycle Operations Register File A B s1 s2 d F/D D/X B X/M O a Data Mem d O D What if you wanted to add a multi-cycle operation? E.g., 4-cycle multiply P/W: separate output latch connects to W stage Controlled by pipeline control and multiplier FSM X Xctrl P P/W Instruction Level Parallelism I: Pipelining 42

43 A Pipelined Multiplier Register File A B s1 s2 d F/D D/X B X/M O a Data Mem d O D P P P P M M M M P0/P1 P1/P2 P2/P3 P3/W Multiplier itself is often pipelined, what does this mean? Product/multiplicand register/alus/latches replicated Can start different multiply operations in consecutive cycles Instruction Level Parallelism I: Pipelining 43

44 Aside: Pipelined Functional Units Almost all multi-cycle functional units are pipelined Each operation takes N cycles But can initiate a new (independent) operation every cycle Requires internal latching and some hardware replication + A cheaper way to improve throughput than multiple non-pipelined units mulf f0,f1,f2 F D E* E* E* E* W mulf f3,f4,f5 F D E* E* E* E* W One exception: int/fp divide: difficult to pipeline and not worth it divf f0,f1,f2 F D E/ E/ E/ E/ W divf f3,f4,f5 F D s* s* s* E/ E/ E/ E/ W s* = structural hazard, two insns need same structure ISAs and pipelines designed to have few of these Canonical example: all insns forced to go through M stage Instruction Level Parallelism I: Pipelining 44

45 Pipeline Diagram with Int. Multiplier mul $4,$3,$5 F D P0 P1 P2 P3 W addi $6,$4,1 F D d* d* d* X M W What about Two instructions could try to write regfile in same cycle? Structural hazard! Must prevent: mul $4,$3,$5 F D P0 P1 P2 P3 W addi $6,$1,1 F D X M W add $5,$6,$10 F D X M W Instruction Level Parallelism I: Pipelining 45

46 More Multiplier Nasties What about Mis-ordered writes to the same register Software thinks add gets $4 from addi, actually gets it from mul mul $4,$3,$5 F D P0 P1 P2 P3 W addi $4,$1,1 F D X M W add $10,$4,$6 F D X M W Multi-cycle operations introduce WAW hazards on simple pipelines Not necessarily only in a supplementary FP pipeline Instruction Level Parallelism I: Pipelining 46

47 Handling WAW Hazards divf f0,f1 → f2 F D E/ E/ E/ E/ E/ W stf f2 → [r1] F D d* d* d* X M W addf f0,f1 → f2 F D E+ E+ W What to do? Option I: stall younger instruction (addf) at writeback + Intuitive, simple – Lower performance, cascading W structural hazards Option II: cancel older instruction (divf) writeback + No performance loss – What if divf or stf cause an exception (e.g., /0, page fault)? Instruction Level Parallelism I: Pipelining 47

48 Handling Interrupts/Exceptions How are interrupts/exceptions handled in a pipeline? Interrupt: external, e.g., timer, I/O device requests Exception: internal, e.g., /0, page fault, illegal instruction We care about restartable interrupts (e.g. stf page fault) divf f0,f1 → f2 F D E/ E/ E/ E/ E/ W stf f2 → [r1] F D d* d* d* X M W addf f0,f1 → f2 F D E+ E+ W Von Neumann says Insn execution should appear sequential and atomic Insn X should complete before instruction X+1 begins + Doesn't physically have to be this way (e.g., pipeline) But be ready to restore to this state at a moment's notice Called precise state or precise interrupts Instruction Level Parallelism I: Pipelining 48

49 Handling Interrupts divf f0,f1 → f2 F D E/ E/ E/ E/ E/ W stf f2 → [r1] F D d* d* d* X M W addf f0,f1 → f2 F D E+ E+ W In this situation Make it appear as if divf finished and stf, addf haven't started Allow divf to writeback Flush stf and addf (so that's what a flush is for) But addf has already written back Keep an undo register file? Complicated Force in-order writebacks? Slow Invoke exception handler Restart stf Instruction Level Parallelism I: Pipelining 49

50 More Interrupt Nastiness divf f0,f1 → f2 F D E/ E/ E/ E/ E/ W stf f2 → [r1] F D d* d* d* X M W divf f0,f4 → f2 F D E/ E/ E/ E/ E/ W What about two simultaneous in-flight interrupts Example: stf page fault, divf /0 Interrupts must be handled in program order (stf first) Handler for stf must see program as if divf hasn't started Must defer interrupts until writeback and force in-order writeback Kind of a bogus example, /0 is non-restartable In general: interrupts are really nasty Some processors (Alpha) only implement precise integer interrupts Easier because fewer WAW scenarios Most floating-point interrupts are non-restartable anyway Instruction Level Parallelism I: Pipelining 50

51 WAR Hazards Write-after-read (WAR) add r1,r2,r3 sub r2,r5,r4 or r6,r3,r1 Compiler effects Scheduling problem: reordering would mean add uses wrong value for r2 Artificial: solve using different output register name for sub Pipeline effects Can't happen in simple in-order pipeline Can happen with out-of-order execution Instruction Level Parallelism I: Pipelining 51

52 Memory Data Hazards So far, have seen/dealt with register dependences Dependences also exist through memory st r2 → [r1] ld [r1] → r4 st r5 → [r1] Read-after-write (RAW) st r2 → [r1] ld [r1] → r4 st r5 → [r1] Write-after-read (WAR) st r2 → [r1] ld [r1] → r4 st r5 → [r1] Write-after-write (WAW) But in an in-order pipeline like ours, they do not become hazards Memory read and write happen at the same stage Register read happens three stages earlier than register write In general: memory dependences are more difficult than register ones st r2 → [r1] F D X M W ld [r1] → r4 F D X M W Instruction Level Parallelism I: Pipelining 52

53 Control Hazards [datapath figure: PC, +4, Insn Mem, F/D, Register File, D/X, <<2, X/M] What about branches? Could just stall to wait for branch outcome (two-cycle penalty) Fetch past branch insns before branch outcome is known Default: assume not-taken (at fetch, can't tell it's a branch) Instruction Level Parallelism I: Pipelining 53

54 Branch Recovery [datapath figure: PC, +4, Insn Mem, F/D, Register File, D/X, <<2, X/M; nops injected into F/D and D/X] Branch recovery: what to do when branch is actually taken Insns that will be written into F/D and D/X are wrong Flush them, i.e., replace them with nops + They haven't written permanent state yet (regfile, DMem) – Two cycle penalty for taken branches Instruction Level Parallelism I: Pipelining 54

55 Branch Performance Back of the envelope calculation Branch: 20%, load: 20%, store: 10%, other: 50% Say, 75% of branches are taken CPI = 1 + 20% * 75% * 2 = 1 + 0.2 * 0.75 * 2 = 1.3 Branches cause 30% slowdown Even worse with deeper pipelines How do we reduce this penalty? Instruction Level Parallelism I: Pipelining 55
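The same envelope math in code; later slides reuse it with different flush fractions, so it is parameterized here. A sketch:

```python
def branch_cpi(branch_frac=0.20, flush_frac=0.75, penalty=2):
    """Base CPI of 1 plus flush cycles; flush_frac is the share of
    branches paying the penalty (taken branches here, mis-predicted
    branches once a predictor is added)."""
    return 1 + branch_frac * flush_frac * penalty

print(branch_cpi())                  # 1.30: assume not-taken, 75% taken
print(branch_cpi(penalty=1))         # 1.15: fast branches (slide 60)
print(branch_cpi(flush_frac=0.05))   # 1.02: 95%-accurate predictor (slide 62)
```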

56 Big Idea: Speculation Speculation Engagement in risky transactions on the chance of profit Speculative execution Execute before all parameters known with certainty Correct speculation + Avoid stall, improve performance Incorrect speculation (mis-speculation) – Must abort/flush/squash incorrect instructions – Must undo incorrect changes (recover pre-speculation state) The game: [%correct * gain] - [(1 - %correct) * penalty] Instruction Level Parallelism I: Pipelining 56
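The game quantifies when guessing beats stalling. A tiny sketch of expected cycles saved per speculation:

```python
def speculation_payoff(p_correct, gain, penalty):
    """Positive means speculating beats stalling on average."""
    return p_correct * gain - (1 - p_correct) * penalty

# In-order 5-stage pipeline: gain = 2, penalty = 0 -> always worth it
print(speculation_payoff(0.75, gain=2, penalty=0))   # 1.5 cycles saved
```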

57 Control Hazards: Control Speculation Deal with control hazards with control speculation Unknown parameter: are these the correct insns to execute next? Mechanics Guess branch target, start fetching at guessed position Execute branch to verify (check) guess Correct speculation? Keep going Mis-speculation? Flush mis-speculated insns Don't write registers or memory until prediction verified Speculation game for in-order 5 stage pipeline Gain = 2 cycles Penalty = 0 cycles No penalty → mis-speculation no worse than stalling %correct = branch prediction accuracy Static (compiler) OK, dynamic (hardware) much better Instruction Level Parallelism I: Pipelining 57

58 Control Speculation and Recovery Correct: addi r1,1 → r3 F D X M W bnez r3,targ F D X M W st r6 → [r7+4] F D X M W targ:add r4,r5 → r4 F D X M W Mis-speculation recovery: what to do on wrong guess Not too painful in an in-order pipeline Branch resolves in X + Younger insns (in F, D) haven't changed permanent state Flush insns currently in F/D and D/X (i.e., replace with nops) Recovery: addi r1,1 → r3 F D X M W bnez r3,targ F D X M W st r6 → [r7+4] F D targ:add r4,r5 → r4 F targ:add r4,r5 → r4 F D X M W Instruction Level Parallelism I: Pipelining 58

59 Reducing Penalty: Fast Branches Fast branch: targets control-hazard penalty Basically, branch insns that can resolve at D, not X Test must be comparison to zero or equality, no time for ALU + New taken branch penalty is 1 Additional comparison insns (e.g., slt) for complex tests Bypass logic should bypass into decode stage now, too (more mux, longer wires) Additional nastiness with exceptions bnez r3,targ F D X M W targ:add r4,r5,r4 F D X M W Instruction Level Parallelism I: Pipelining 59

60 Fast Branch Performance Assume: Branch: 20%, 75% of branches are taken CPI = 1 + 20% * 75% * 1 = 1 + 0.15 = 1.15 15% slowdown (better than the 30% from before) But wait, fast branches assume only simple comparisons Fine for MIPS But not fine for ISAs with branch if $1 > $2 operations In such cases, say 25% of branches require an extra insn CPI = 1 + (20% * 75% * 1) + 20% * 25% * 1 (extra insn) = 1.2 Example of ISA and micro-architecture interaction Type of branch instructions Another option: Delayed branch or branch delay slot What about condition codes? Instruction Level Parallelism I: Pipelining 60

61 Fewer Mispredictions: Branch Prediction PC + 4 BP Insn Mem nop TG PC Register File s1 s2 d Dynamic branch prediction: Hardware guesses outcome Start fetching from guessed address Instruction Level Parallelism I: Pipelining 61 TG PC F/D D/X X/M nop A B <> S X << 2 O B

62 Branch Prediction Performance Parameters Branch: 20%, load: 20%, store: 10%, other: 50% 75% of branches are taken Dynamic branch prediction Branches predicted with 95% accuracy CPI = 1 + 20% * 5% * 2 = 1.02 Instruction Level Parallelism I: Pipelining 62

63 Dynamic Branch Prediction regfile I$ B P D$ Step #1: is it a branch? Easy after decode... Step #2: is the branch taken or not taken? Direction predictor (applies to conditional branches only) Predicts taken/not-taken Step #3: if the branch is taken, where does it go? Easy after decode Instruction Level Parallelism I: Pipelining 63

64 Branch Direction Prediction Learn from past, predict the future Record the past in a hardware structure Direction predictor (DP) Map conditional-branch PC to taken/not-taken (T/N) decision Individual conditional branches often unbiased or weakly biased 90%+ one way or the other considered biased Why? Loop back edges, checking for uncommon conditions Branch history table (BHT): simplest predictor PC indexes table of bits (0 = N, 1 = T), no tags Essentially: branch will go same way it went last time PC [31:10] [9:2] 1:0 What about aliasing? Two PC with the same lower bits? BHT T or NT T or NT Prediction (taken or not taken) Instruction Level Parallelism I: Pipelining 64
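A last-time predictor is a few lines of state. A minimal sketch of the BHT exactly as described: an untagged table of single bits indexed by low PC bits (the table size here is arbitrary):

```python
class BHT:
    """Branch history table: predict the way the branch went last time.
    Untagged, so two PCs with the same low bits alias to one entry."""
    def __init__(self, index_bits=8):
        self.mask = (1 << index_bits) - 1
        self.table = [0] * (1 << index_bits)   # 0 = not-taken, 1 = taken

    def predict(self, pc):
        return bool(self.table[(pc >> 2) & self.mask])  # >>2 drops byte offset

    def update(self, pc, taken):
        self.table[(pc >> 2) & self.mask] = int(taken)
```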

65 Branch History Table (BHT) Branch history table (BHT): simplest direction predictor PC indexes table of bits (0 = N, 1 = T), no tags Essentially: branch will go same way it went last time Problem: consider inner loop branch below (* = mis-prediction) for (i=0;i<100;i++) for (j=0;j<3;j++) // whatever State/prediction N* T T T* N* T T T* N* T T T* Outcome T T T N T T T N T T T N Two built-in mis-predictions per inner loop iteration Branch predictor changes its mind too quickly Instruction Level Parallelism I: Pipelining 65

66 Two-Bit Saturating Counters (2bc) Two-bit saturating counters (2bc) [Smith] Replace each single-bit prediction with a two-bit counter: (0,1,2,3) = (N,n,t,T) Adds hysteresis Force predictor to mis-predict twice before changing its mind State/prediction N* n* t T* t T T T* t T T T* Outcome T T T N T T T N T T T N One mispredict each loop execution (rather than two) + Fixes this pathology Can we do even better? Instruction Level Parallelism I: Pipelining 66
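The two-bit counter is the same table with one extra line of hysteresis. A sketch; states 0..3 correspond to (N, n, t, T):

```python
class TwoBitBHT:
    """2bc predictor: must mis-predict twice before the prediction flips."""
    def __init__(self, index_bits=8):
        self.mask = (1 << index_bits) - 1
        self.table = [0] * (1 << index_bits)   # 0=N, 1=n, 2=t, 3=T

    def predict(self, pc):
        return self.table[(pc >> 2) & self.mask] >= 2   # t or T -> taken

    def update(self, pc, taken):
        i = (pc >> 2) & self.mask
        c = self.table[i]
        self.table[i] = min(c + 1, 3) if taken else max(c - 1, 0)
```

Running the inner-loop pattern T T T N through this table reproduces the slide's single mis-prediction per iteration.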

67 Aside: Two Different Alternatives [state-transition diagrams comparing a saturation counter and a hysteresis counter over the states N, n, t, T] Instruction Level Parallelism I: Pipelining 67

68 Correlated Predictor Correlated (two-level) predictor [Patt, MICRO 1994] Exploits observation that branch outcomes are correlated Maintains separate prediction per (PC, BHR) pair Branch history register (BHR): recent branch outcomes Simple working example: assume program has one branch (only one PC) BHT: one 1-bit DP entry BHT + 2-bit BHR: 2^2 = 4 1-bit DP entries State/prediction BHR=NN N* T T T T T T T T T T T active pattern BHR=NT N N* T T T T T T T T T T BHR=TN N N N N N* T T T T T T T BHR=TT N N N* T* N N N* T* N N N* T* Outcome N N T T T N T T T N T T T N We didn't make anything better, what's the problem? Instruction Level Parallelism I: Pipelining 68

69 Correlated Predictor What happened? BHR wasn't long enough to capture the pattern Try again: BHT + 3-bit BHR: 2^3 = 8 1-bit DP entries State/prediction BHR=NNN N* T T T T T T T T T T T BHR=NNT N N* T T T T T T T T T T BHR=NTN N N N N N N N N N N N N active pattern BHR=NTT N N N* T T T T T T T T T BHR=TNN N N N N N N N N N N N N BHR=TNT N N N N N N* T T T T T T BHR=TTN N N N N N* T T T T T T T BHR=TTT N N N N N N N N N N N N Outcome N N N T T T N T T T N T T T N + No mis-predictions after predictor learns all the relevant patterns Instruction Level Parallelism I: Pipelining 69
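The correlated predictor just widens the table index with recent outcomes. A sketch with one global BHR and 1-bit entries, as in the worked example (sizes arbitrary):

```python
class CorrelatedPredictor:
    """Two-level predictor: index = PC bits concatenated with a global
    branch history register (BHR) of the last few outcomes."""
    def __init__(self, pc_bits=6, hist_bits=3):
        self.hist_bits, self.bhr = hist_bits, 0
        self.pc_mask = (1 << pc_bits) - 1
        self.table = [0] * (1 << (pc_bits + hist_bits))  # 1-bit DP entries

    def _index(self, pc):
        return (((pc >> 2) & self.pc_mask) << self.hist_bits) | self.bhr

    def predict(self, pc):
        return bool(self.table[self._index(pc)])

    def update(self, pc, taken):
        self.table[self._index(pc)] = int(taken)
        self.bhr = ((self.bhr << 1) | int(taken)) & ((1 << self.hist_bits) - 1)
```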

70 Correlated Predictor Design choice I: one global BHR or one per PC (local)? Each one captures different kinds of patterns Global is better, captures local patterns for tight loop branches Design choice II: how many history bits (BHR size)? Tricky one + Given unlimited resources, longer BHRs are better, but BHT utilization decreases – Many history patterns are never seen – Many branches are history independent (don't care) PC xor BHR allows multiple PCs to dynamically share BHT BHR length < log2(BHT size) – Predictor takes longer to train Typical length: 8-12 Instruction Level Parallelism I: Pipelining 70

71 Hybrid Predictor Hybrid (tournament) predictor [McFarling] Attacks correlated predictor BHT utilization problem Idea: combine two predictors Simple BHT predicts history-independent branches Correlated predictor predicts only branches that need history Chooser assigns branches to one predictor or the other (and update) Branches start in simple BHT, move on mis-prediction threshold + Correlated predictor can be made smaller, handles fewer branches [plot: % accuracy vs. BHR length for a 2-bit saturating counter, gshare (PC xor BHR), and the Alpha hybrid] Instruction Level Parallelism I: Pipelining 71

72 When to Perform Branch Prediction? During Decode Look at instruction opcode to determine branch instructions Can calculate next PC from instruction (for PC-relative branches) One cycle mis-fetch penalty even if branch predictor is correct bnez r3,targ F D X M W targ:add r4,r5,r4 F D X M W During Fetch? How do we do that? Instruction Level Parallelism I: Pipelining 72

73 Revisiting Branch Prediction Components I$ B P regfile D$ Step #1: is it a branch? Easy after decode... during fetch: predictor Step #2: is the branch taken or not taken? Direction predictor (as before) Step #3: if the branch is taken, where does it go? Branch target predictor (BTB) Supplies target PC if branch is taken Instruction Level Parallelism I: Pipelining 73

74 Branch Target Buffer (BTB) As before: learn from past, predict the future Record the past branch targets in a hardware structure Branch target buffer (BTB): guess the future PC based on past behavior Last time the branch X was taken, it went to address Y So, in the future, if address X is fetched, fetch address Y next Operation Like a cache: address = PC, data = target-pc Access at Fetch in parallel with instruction memory predicted-target = BTB[PC] Updated at X whenever target != predicted-target BTB[PC] = target Aliasing? No problem, this is only a prediction (Really?) What happens with mis-speculation recovery? Instruction Level Parallelism I: Pipelining 74

75 Branch Target Buffer (continued) At Fetch, how does insn know that it's a branch & should read BTB? Answer: it doesn't have to, all insns read BTB Key idea: use BTB to predict which insns are branches Tag each entry (with bits of the PC) Just like a cache Tag hit signifies instruction at the PC is a branch Update only on taken branches (thus only taken branches in table) Access BTB at Fetch in parallel with instruction memory tag == PC BTB + 4 target predicted target Instruction Level Parallelism I: Pipelining 75
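The BTB behaves like a tiny direct-mapped cache read in parallel with the I$. A minimal sketch (full tags for simplicity; the next slide's partial tags are an optimization of this):

```python
class BTB:
    """Branch target buffer: PC in, predicted next PC out.
    A tag hit doubles as 'this insn is a taken branch'."""
    def __init__(self, index_bits=6):
        self.index_bits = index_bits
        self.entries = {}                    # index -> (tag, target)

    def _split(self, pc):
        index = (pc >> 2) & ((1 << self.index_bits) - 1)
        return index, pc >> (2 + self.index_bits)

    def predict(self, pc):
        index, tag = self._split(pc)
        entry = self.entries.get(index)
        if entry and entry[0] == tag:
            return entry[1]                  # hit: fetch the recorded target
        return pc + 4                        # miss: fall through

    def update(self, pc, target):            # called only for taken branches
        index, tag = self._split(pc)
        self.entries[index] = (tag, target)
```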

76 Both: Partial Tags and PC-Relative target-pc Saving time in prediction is mandatory Track more live branches with fewer BTB bits [bit-field diagram: full PC [31:10][9:2][1:0] vs. a partial tag [19:10] and a PC-relative target [13:2]; predicted target reassembled as [31:13][13:2][9:2][1:0], with the tag match signalling "branch?"] Instruction Level Parallelism I: Pipelining 76

77 Why Does a BTB Work? Because most control insns use direct targets Target encoded in insn itself same target every time What about indirect targets? Target held in a register can be different each time Indirect conditional jumps are not widely supported Two indirect call idioms + Dynamically linked functions (DLLs): target always the same Dynamically dispatched (virtual) functions: hard but uncommon Also two indirect unconditional jump idioms Switches: hard but uncommon Function returns: hard and common Instruction Level Parallelism I: Pipelining 77

78 Return Address Stack (RAS) tag == PC BTB + 4 target predicted target RAS PD IMem Return address stack (RAS) Call instruction? RAS[TOS++] = PC+4 Return instruction? predicted-target = RAS[--TOS] Q: how can you tell if an insn is a call/return before decoding it? Accessing RAS on every insn BTB-style doesn't work Answer: pre-decode bits in IMem, written when the insn is first executed Can also be used to signify branches Instruction Level Parallelism I: Pipelining 78
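The RAS itself is trivial; the hard part (knowing an insn is a call/return at fetch) is the pre-decode trick above. A sketch; real RASes are fixed-size and silently wrap on overflow:

```python
class RAS:
    """Return address stack: push on call, pop on return."""
    def __init__(self, depth=8):
        self.stack, self.depth = [], depth

    def on_call(self, pc):
        if len(self.stack) == self.depth:
            self.stack.pop(0)          # overwrite the oldest entry
        self.stack.append(pc + 4)      # the return address

    def predict_return(self):
        return self.stack.pop() if self.stack else None
```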

79 Putting It All Together BTB & branch direction predictor during fetch PC BTB + 4 tag target == predicted target RAS PD IMem is ret? BHT taken/not-taken If branch prediction correct, no taken branch penalty Instruction Level Parallelism I: Pipelining 79

80 Branch Prediction Performance Dynamic branch prediction Simple predictor at fetch; branches predicted with 75% accuracy CPI = 1 + (20% * 25% * 2) = 1.1 More advanced predictor at fetch: 95% accuracy CPI = 1 + (20% * 5% * 2) = 1.02 Branch mis-predictions still a big problem though Pipelines are long: typical mis-prediction penalty is 10+ cycles Pipelines are superscalar (later): typical mis-prediction penalty is huge Advanced branch predictors could jeopardize fetch stage delay Solution I: pipelining the branch predictor Solution II: speculate using a fast predictor (fetch next insn) and in parallel predict using a slower BP; if the two disagree, assume misprediction Prophet/critic branch predictors Instruction Level Parallelism I: Pipelining 80

81 Avoiding Branches via ISA: Predication Conventional control Conditionally executed insns also conditionally fetched beq r3,targ F D X M W sub r5,r6,r1 F D flushed: wrong path targ:add r4,r5,r4 F flushed: why? targ:add r4,r5,r4 F D X M W If beq mis-predicts, both sub and add must be flushed Waste: add is independent of mis-prediction Predication: not prediction, predication ISA support for conditionally-executed unconditionally-fetched insns If beq mis-predicts, annul sub in place, preserve add Example is if-then, but if-then-else can be predicated too How is this done? How does add get correct value for r5? Instruction Level Parallelism I: Pipelining 81

82 Full Predication Full predication Every insn can be annulled, annulment controlled by Predicate registers: additional register in each insn (e.g., IA64) setp.eq r3,p3 F D X M W sub.p r5,r6,r1,p3 F D X annulled targ:add r4,r5,r4 F D X M W Predicate codes: condition bits in each insn (e.g., ARM) setcc r3 F D X M W sub.nz r5,r6,r1 F D X annulled targ:add r4,r5,r4 F D X M W Only ALU insn shown (sub), but this applies to all insns, even stores Branches replaced with set-predicate insns Instruction Level Parallelism I: Pipelining 82

83 Conditional Register Moves (CMOVs) Conditional (register) moves Construct appearance of full predication from one primitive cmoveq r1,r2,r3 // if (r1==0) r3=r2; May require some code duplication to achieve desired effect Painful, potentially impossible for some insn sequences Requires more registers Only good way of retro-fitting predication onto ISA (e.g., IA32, Alpha) sub r9,r6,1 F D X M W cmovne r3,r9,r5 F D X M W targ:add r4,r5,r4 F D X M W Instruction Level Parallelism I: Pipelining 83

84 Predication Performance Predication overhead is additional insns Sometimes overhead is zero Not-taken if-then branch: all predicated insns executed Most of the time it isn't Taken if-then branch: all predicated insns annulled Any if-then-else branch: half of predicated insns annulled Almost all cases if using conditional moves Calculation: for a given branch, predicate (vs speculate) if Average number of additional insns < overall mis-prediction penalty For an individual branch Mis-prediction penalty in a 5-stage pipeline = 2 Mis-prediction rate is <50%, and often <20% → Overall mis-prediction penalty <1 and often <0.4 So when is predication worth it? Instruction Level Parallelism I: Pipelining 84

85 Predication Performance What does predication actually accomplish? In a scalar 5-stage pipeline (penalty = 2): nothing In a 4-way superscalar 15-stage pipeline (penalty = 60): something Use when mis-predictions >10% and insn overhead <6 In a 4-way out-of-order superscalar (penalty ~ 150) Should be used in more situations Still: only useful for branches that mis-predict frequently Strange: ARM typically uses scalar 5-9 stage pipelines Why is the ARM ISA predicated then? Low-power: eliminates the need for a complex branch predictor Real-time: predicated code performs consistently Loop scheduling: effective software pipelining requires predication Instruction Level Parallelism I: Pipelining 85

86 Research: Perceptron Predictor Perceptron predictor [Jimenez] Attacks BHR size problem using machine learning approach BHT replaced by table of function coefficients F_i (signed) Predict taken if Σ(BHR_i * F_i) > threshold + Table size = #PC * |BHR| * |F| (can use long BHR: ~60 bits) Equivalent correlated predictor would be #PC * 2^|BHR| How does it learn? Update F_i when branch is taken: BHR_i == 1 ? F_i++ : F_i-- Don't-care F_i bits stay near 0, important F_i bits saturate + Hybrid BHT/perceptron accuracy: 95-98% PC F Σ(F_i * BHR_i) > thresh BHR Instruction Level Parallelism I: Pipelining 86
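The perceptron predictor in miniature. A sketch with history bits held as ±1 so prediction is a dot product, matching the slide's Σ(BHR_i * F_i) formulation; the per-PC weight table, bias weight, and training threshold follow Jimenez's scheme, but all sizes here are arbitrary:

```python
class PerceptronPredictor:
    """Predict taken if dot(weights[pc], history) >= 0; train the
    weights when wrong or when the dot product is small."""
    def __init__(self, pc_bits=6, hist=16, theta=24):
        self.hist, self.theta = hist, theta
        self.bhr = [1] * hist                        # outcomes as +1/-1
        self.pc_mask = (1 << pc_bits) - 1
        self.w = [[0] * (hist + 1)                   # +1 for a bias weight
                  for _ in range(1 << pc_bits)]

    def _dot(self, pc):
        w = self.w[(pc >> 2) & self.pc_mask]
        return w[0] + sum(wi * xi for wi, xi in zip(w[1:], self.bhr))

    def predict(self, pc):
        return self._dot(pc) >= 0

    def update(self, pc, taken):
        y, t = self._dot(pc), (1 if taken else -1)
        if (y >= 0) != taken or abs(y) <= self.theta:
            w = self.w[(pc >> 2) & self.pc_mask]
            w[0] += t                                # bias tracks the branch's bias
            for i, xi in enumerate(self.bhr):
                w[i + 1] += t * xi                   # agreeing history bits strengthen
        self.bhr = self.bhr[1:] + [t]
```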

87 More Research: GEHL Predictor Problem with both correlated predictor and perceptron Same BHT real-estate dedicated to 1st history bit (1 column) as to 2nd, 3rd, 10th, 60th Not a good use of space: 1st bit much more important than 60th GEometric History-Length predictor [Seznec, ISCA 05] Multiple BHTs, indexed by geometrically longer BHRs (0, 4, 16, 32) BHTs are (partially) tagged, not separate chooser Predict: use matching entry from BHT with longest BHR Mis-predict: create entry in BHT with longer BHR + Only 25% of BHT used for bits (not 50%) Helps amortize cost of tagging + Trains quickly 95-97% accurate Instruction Level Parallelism I: Pipelining 87

88 Acknowledgments Slides developed by Amir Roth of the University of Pennsylvania with sources that included University of Wisconsin slides by Mark Hill, Guri Sohi, Jim Smith, and David Wood. Slides enhanced by Milo Martin and Mark Hill with sources that included Profs. Asanovic, Falsafi, Hoe, Lipasti, Shen, Smith, Sohi, Vijaykumar, and Wood Slides re-enhanced by V. Puente of University of Cantabria Instruction Level Parallelism I: Pipelining 88


More information

A Lab Course on Computer Architecture

A Lab Course on Computer Architecture A Lab Course on Computer Architecture Pedro López José Duato Depto. de Informática de Sistemas y Computadores Facultad de Informática Universidad Politécnica de Valencia Camino de Vera s/n, 46071 - Valencia,

More information

EE361: Digital Computer Organization Course Syllabus

EE361: Digital Computer Organization Course Syllabus EE361: Digital Computer Organization Course Syllabus Dr. Mohammad H. Awedh Spring 2014 Course Objectives Simply, a computer is a set of components (Processor, Memory and Storage, Input/Output Devices)

More information

Exploring the Design of the Cortex-A15 Processor ARM s next generation mobile applications processor. Travis Lanier Senior Product Manager

Exploring the Design of the Cortex-A15 Processor ARM s next generation mobile applications processor. Travis Lanier Senior Product Manager Exploring the Design of the Cortex-A15 Processor ARM s next generation mobile applications processor Travis Lanier Senior Product Manager 1 Cortex-A15: Next Generation Leadership Cortex-A class multi-processor

More information

Computer Performance. Topic 3. Contents. Prerequisite knowledge Before studying this topic you should be able to:

Computer Performance. Topic 3. Contents. Prerequisite knowledge Before studying this topic you should be able to: 55 Topic 3 Computer Performance Contents 3.1 Introduction...................................... 56 3.2 Measuring performance............................... 56 3.2.1 Clock Speed.................................

More information

Lecture 7: Clocking of VLSI Systems

Lecture 7: Clocking of VLSI Systems Lecture 7: Clocking of VLSI Systems MAH, AEN EE271 Lecture 7 1 Overview Reading Wolf 5.3 Two-Phase Clocking (good description) W&E 5.5.1, 5.5.2, 5.5.3, 5.5.4, 5.5.9, 5.5.10 - Clocking Note: The analysis

More information

UNIVERSITY OF CALIFORNIA, DAVIS Department of Electrical and Computer Engineering. EEC180B Lab 7: MISP Processor Design Spring 1995

UNIVERSITY OF CALIFORNIA, DAVIS Department of Electrical and Computer Engineering. EEC180B Lab 7: MISP Processor Design Spring 1995 UNIVERSITY OF CALIFORNIA, DAVIS Department of Electrical and Computer Engineering EEC180B Lab 7: MISP Processor Design Spring 1995 Objective: In this lab, you will complete the design of the MISP processor,

More information

Computer System: User s View. Computer System Components: High Level View. Input. Output. Computer. Computer System: Motherboard Level

Computer System: User s View. Computer System Components: High Level View. Input. Output. Computer. Computer System: Motherboard Level System: User s View System Components: High Level View Input Output 1 System: Motherboard Level 2 Components: Interconnection I/O MEMORY 3 4 Organization Registers ALU CU 5 6 1 Input/Output I/O MEMORY

More information

Central Processing Unit (CPU)

Central Processing Unit (CPU) Central Processing Unit (CPU) CPU is the heart and brain It interprets and executes machine level instructions Controls data transfer from/to Main Memory (MM) and CPU Detects any errors In the following

More information

U. Wisconsin CS/ECE 752 Advanced Computer Architecture I

U. Wisconsin CS/ECE 752 Advanced Computer Architecture I U. Wisconsin CS/ECE 752 Advanced Computer Architecture I Prof. David A. Wood Unit 0: Introduction Slides developed by Amir Roth of University of Pennsylvania with sources that included University of Wisconsin

More information

OC By Arsene Fansi T. POLIMI 2008 1

OC By Arsene Fansi T. POLIMI 2008 1 IBM POWER 6 MICROPROCESSOR OC By Arsene Fansi T. POLIMI 2008 1 WHAT S IBM POWER 6 MICROPOCESSOR The IBM POWER6 microprocessor powers the new IBM i-series* and p-series* systems. It s based on IBM POWER5

More information

Learning Outcomes. Simple CPU Operation and Buses. Composition of a CPU. A simple CPU design

Learning Outcomes. Simple CPU Operation and Buses. Composition of a CPU. A simple CPU design Learning Outcomes Simple CPU Operation and Buses Dr Eddie Edwards eddie.edwards@imperial.ac.uk At the end of this lecture you will Understand how a CPU might be put together Be able to name the basic components

More information

An Introduction to the ARM 7 Architecture

An Introduction to the ARM 7 Architecture An Introduction to the ARM 7 Architecture Trevor Martin CEng, MIEE Technical Director This article gives an overview of the ARM 7 architecture and a description of its major features for a developer new

More information

Module: Software Instruction Scheduling Part I

Module: Software Instruction Scheduling Part I Module: Software Instruction Scheduling Part I Sudhakar Yalamanchili, Georgia Institute of Technology Reading for this Module Loop Unrolling and Instruction Scheduling Section 2.2 Dependence Analysis Section

More information

Computer Organization and Architecture. Characteristics of Memory Systems. Chapter 4 Cache Memory. Location CPU Registers and control unit memory

Computer Organization and Architecture. Characteristics of Memory Systems. Chapter 4 Cache Memory. Location CPU Registers and control unit memory Computer Organization and Architecture Chapter 4 Cache Memory Characteristics of Memory Systems Note: Appendix 4A will not be covered in class, but the material is interesting reading and may be used in

More information

CARNEGIE MELLON UNIVERSITY

CARNEGIE MELLON UNIVERSITY CARNEGIE MELLON UNIVERSITY VALUE LOCALITY AND SPECULATIVE EXECUTION A DISSERTATION SUBMITTED TO THE GRADUATE SCHOOL IN PARTIAL FULFILLMENT OF THE REQUIREMENTS for the degree of DOCTOR OF PHILOSOPHY in

More information

Checkpoint Processing and Recovery: Towards Scalable Large Instruction Window Processors

Checkpoint Processing and Recovery: Towards Scalable Large Instruction Window Processors Checkpoint Processing and Recovery: Towards Scalable Large Instruction Window Processors Haitham Akkary Ravi Rajwar Srikanth T. Srinivasan Microprocessor Research Labs, Intel Corporation Hillsboro, Oregon

More information

Bindel, Spring 2010 Applications of Parallel Computers (CS 5220) Week 1: Wednesday, Jan 27

Bindel, Spring 2010 Applications of Parallel Computers (CS 5220) Week 1: Wednesday, Jan 27 Logistics Week 1: Wednesday, Jan 27 Because of overcrowding, we will be changing to a new room on Monday (Snee 1120). Accounts on the class cluster (crocus.csuglab.cornell.edu) will be available next week.

More information

Midterm I SOLUTIONS March 21 st, 2012 CS252 Graduate Computer Architecture

Midterm I SOLUTIONS March 21 st, 2012 CS252 Graduate Computer Architecture University of California, Berkeley College of Engineering Computer Science Division EECS Spring 2012 John Kubiatowicz Midterm I SOLUTIONS March 21 st, 2012 CS252 Graduate Computer Architecture Your Name:

More information

Computer Systems Structure Main Memory Organization

Computer Systems Structure Main Memory Organization Computer Systems Structure Main Memory Organization Peripherals Computer Central Processing Unit Main Memory Computer Systems Interconnection Communication lines Input Output Ward 1 Ward 2 Storage/Memory

More information

Introduction to Microprocessors

Introduction to Microprocessors Introduction to Microprocessors Yuri Baida yuri.baida@gmail.com yuriy.v.baida@intel.com October 2, 2010 Moscow Institute of Physics and Technology Agenda Background and History What is a microprocessor?

More information

Introducción. Diseño de sistemas digitales.1

Introducción. Diseño de sistemas digitales.1 Introducción Adapted from: Mary Jane Irwin ( www.cse.psu.edu/~mji ) www.cse.psu.edu/~cg431 [Original from Computer Organization and Design, Patterson & Hennessy, 2005, UCB] Diseño de sistemas digitales.1

More information

Computer Organization and Architecture

Computer Organization and Architecture Computer Organization and Architecture Chapter 11 Instruction Sets: Addressing Modes and Formats Instruction Set Design One goal of instruction set design is to minimize instruction length Another goal

More information

what operations can it perform? how does it perform them? on what kind of data? where are instructions and data stored?

what operations can it perform? how does it perform them? on what kind of data? where are instructions and data stored? Inside the CPU how does the CPU work? what operations can it perform? how does it perform them? on what kind of data? where are instructions and data stored? some short, boring programs to illustrate the

More information

CPU Organization and Assembly Language

CPU Organization and Assembly Language COS 140 Foundations of Computer Science School of Computing and Information Science University of Maine October 2, 2015 Outline 1 2 3 4 5 6 7 8 Homework and announcements Reading: Chapter 12 Homework:

More information

StrongARM** SA-110 Microprocessor Instruction Timing

StrongARM** SA-110 Microprocessor Instruction Timing StrongARM** SA-110 Microprocessor Instruction Timing Application Note September 1998 Order Number: 278194-001 Information in this document is provided in connection with Intel products. No license, express

More information

Chapter 2 Logic Gates and Introduction to Computer Architecture

Chapter 2 Logic Gates and Introduction to Computer Architecture Chapter 2 Logic Gates and Introduction to Computer Architecture 2.1 Introduction The basic components of an Integrated Circuit (IC) is logic gates which made of transistors, in digital system there are

More information

COMPUTER ORGANIZATION ARCHITECTURES FOR EMBEDDED COMPUTING

COMPUTER ORGANIZATION ARCHITECTURES FOR EMBEDDED COMPUTING COMPUTER ORGANIZATION ARCHITECTURES FOR EMBEDDED COMPUTING 2013/2014 1 st Semester Sample Exam January 2014 Duration: 2h00 - No extra material allowed. This includes notes, scratch paper, calculator, etc.

More information

LSN 2 Computer Processors

LSN 2 Computer Processors LSN 2 Computer Processors Department of Engineering Technology LSN 2 Computer Processors Microprocessors Design Instruction set Processor organization Processor performance Bandwidth Clock speed LSN 2

More information

CS521 CSE IITG 11/23/2012

CS521 CSE IITG 11/23/2012 CS521 CSE TG 11/23/2012 A Sahu 1 Degree of overlap Serial, Overlapped, d, Super pipelined/superscalar Depth Shallow, Deep Structure Linear, Non linear Scheduling of operations Static, Dynamic A Sahu slide

More information

CHAPTER 7: The CPU and Memory

CHAPTER 7: The CPU and Memory CHAPTER 7: The CPU and Memory The Architecture of Computer Hardware, Systems Software & Networking: An Information Technology Approach 4th Edition, Irv Englander John Wiley and Sons 2010 PowerPoint slides

More information

This Unit: Virtual Memory. CIS 501 Computer Architecture. Readings. A Computer System: Hardware

This Unit: Virtual Memory. CIS 501 Computer Architecture. Readings. A Computer System: Hardware This Unit: Virtual CIS 501 Computer Architecture Unit 5: Virtual App App App System software Mem CPU I/O The operating system (OS) A super-application Hardware support for an OS Virtual memory Page tables

More information

Lecture 10: Sequential Circuits

Lecture 10: Sequential Circuits Introduction to CMOS VLSI esign Lecture 10: Sequential Circuits avid Harris Harvey Mudd College Spring 2004 Outline q Sequencing q Sequencing Element esign q Max and Min-elay q Clock Skew q Time Borrowing

More information

Data Dependences. A data dependence occurs whenever one instruction needs a value produced by another.

Data Dependences. A data dependence occurs whenever one instruction needs a value produced by another. Data Hazards 1 Hazards: Key Points Hazards cause imperfect pipelining They prevent us from achieving CPI = 1 They are generally causes by counter flow data pennces in the pipeline Three kinds Structural

More information

Lecture 11: Sequential Circuit Design

Lecture 11: Sequential Circuit Design Lecture 11: Sequential Circuit esign Outline Sequencing Sequencing Element esign Max and Min-elay Clock Skew Time Borrowing Two-Phase Clocking 2 Sequencing Combinational logic output depends on current

More information

Thread level parallelism

Thread level parallelism Thread level parallelism ILP is used in straight line code or loops Cache miss (off-chip cache and main memory) is unlikely to be hidden using ILP. Thread level parallelism is used instead. Thread: process

More information

A s we saw in Chapter 4, a CPU contains three main sections: the register section,

A s we saw in Chapter 4, a CPU contains three main sections: the register section, 6 CPU Design A s we saw in Chapter 4, a CPU contains three main sections: the register section, the arithmetic/logic unit (ALU), and the control unit. These sections work together to perform the sequences

More information

Execution Cycle. Pipelining. IF and ID Stages. Simple MIPS Instruction Formats

Execution Cycle. Pipelining. IF and ID Stages. Simple MIPS Instruction Formats Execution Cycle Pipelining CSE 410, Spring 2005 Computer Systems http://www.cs.washington.edu/410 1. Instruction Fetch 2. Instruction Decode 3. Execute 4. Memory 5. Write Back IF and ID Stages 1. Instruction

More information

"JAGUAR AMD s Next Generation Low Power x86 Core. Jeff Rupley, AMD Fellow Chief Architect / Jaguar Core August 28, 2012

JAGUAR AMD s Next Generation Low Power x86 Core. Jeff Rupley, AMD Fellow Chief Architect / Jaguar Core August 28, 2012 "JAGUAR AMD s Next Generation Low Power x86 Core Jeff Rupley, AMD Fellow Chief Architect / Jaguar Core August 28, 2012 TWO X86 CORES TUNED FOR TARGET MARKETS Mainstream Client and Server Markets Bulldozer

More information

Energy-Efficient, High-Performance Heterogeneous Core Design

Energy-Efficient, High-Performance Heterogeneous Core Design Energy-Efficient, High-Performance Heterogeneous Core Design Raj Parihar Core Design Session, MICRO - 2012 Advanced Computer Architecture Lab, UofR, Rochester April 18, 2013 Raj Parihar Energy-Efficient,

More information

CS 6290 I/O and Storage. Milos Prvulovic

CS 6290 I/O and Storage. Milos Prvulovic CS 6290 I/O and Storage Milos Prvulovic Storage Systems I/O performance (bandwidth, latency) Bandwidth improving, but not as fast as CPU Latency improving very slowly Consequently, by Amdahl s Law: fraction

More information

Intel 8086 architecture

Intel 8086 architecture Intel 8086 architecture Today we ll take a look at Intel s 8086, which is one of the oldest and yet most prevalent processor architectures around. We ll make many comparisons between the MIPS and 8086

More information

The 104 Duke_ACC Machine

The 104 Duke_ACC Machine The 104 Duke_ACC Machine The goal of the next two lessons is to design and simulate a simple accumulator-based processor. The specifications for this processor and some of the QuartusII design components

More information

Effective ahead pipelining of instruction block address generation

Effective ahead pipelining of instruction block address generation Effective ahead pipelining of instruction block address generation to appear in proceedings of the 30th ACM-IEEE International Symposium on Computer Architecture, 9-11 June 2003, San Diego André Seznec

More information

High Performance Processor Architecture. André Seznec IRISA/INRIA ALF project-team

High Performance Processor Architecture. André Seznec IRISA/INRIA ALF project-team High Performance Processor Architecture André Seznec IRISA/INRIA ALF project-team 1 2 Moore s «Law» Nb of transistors on a micro processor chip doubles every 18 months 1972: 2000 transistors (Intel 4004)

More information

Architecture and Implementation of the ARM Cortex -A8 Microprocessor

Architecture and Implementation of the ARM Cortex -A8 Microprocessor Architecture and Implementation of the ARM Cortex -A8 Microprocessor October 2005 Introduction The ARM Cortex -A8 microprocessor is the first applications microprocessor in ARM s new Cortex family. With

More information

ADVANCED PROCESSOR ARCHITECTURES AND MEMORY ORGANISATION Lesson-12: ARM

ADVANCED PROCESSOR ARCHITECTURES AND MEMORY ORGANISATION Lesson-12: ARM ADVANCED PROCESSOR ARCHITECTURES AND MEMORY ORGANISATION Lesson-12: ARM 1 The ARM architecture processors popular in Mobile phone systems 2 ARM Features ARM has 32-bit architecture but supports 16 bit

More information