1 Lecture 18: Snooping vs. Directory Based Coherency Professor David A. Patterson Computer Science 252 Fall 1996 DAP.F96 1

2 Review: Parallel Framework
Layers: Programming Model, Communication Abstraction, Interconnection SW/OS, Interconnection HW
Programming Model:
» Multiprogramming: lots of jobs, no communication
» Shared address space: communicate via memory
» Message passing: send and receive messages
» Data Parallel: several agents operate on several data sets simultaneously and then exchange information globally and simultaneously (shared or message passing)
Communication Abstraction:
» Shared address space: e.g., load, store, atomic swap
» Message passing: e.g., send, receive library calls
» Debate over this topic (ease of programming, scaling) => many hardware designs 1:1 programming model
DAP.F96 2
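The communication-abstraction contrast above can be made concrete: in a shared address space, even synchronization is built from ordinary loads, stores, and an atomic swap. Below is a minimal sketch of a test-and-set style spin lock using C11 atomics; the lock variable and function names are illustrative, not from the slides.

```c
/* "Atomic swap" as a shared-address-space primitive: a minimal
 * test-and-set spin lock sketch using C11 atomics (names illustrative). */
#include <stdatomic.h>

static atomic_int lock = 0;

static void acquire(void) {
    /* atomically swap in 1; if the old value was already 1, someone else holds it */
    while (atomic_exchange(&lock, 1) == 1)
        ;                                   /* spin */
}

static void release(void) {
    atomic_store(&lock, 0);                 /* ordinary store releases the lock */
}

int main(void) {
    acquire();
    /* critical section: loads and stores to shared memory go here */
    release();
    return 0;
}
```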

3 Review: Small-Scale MP Designs Memory: centralized with uniform access time (UMA) and bus interconnect Examples: Sun Enterprise 5000, SGI Challenge, Intel SystemPro DAP.F96 3

4 Review: Large-Scale MP Designs Memory: distributed with nonuniform access time (NUMA) and scalable interconnect (distributed memory) Examples: T3E (see Ch. 1, Fig. 1-21, page 45 of [CSG96]) [figure labels: 1 cycle, 40 cycles, 100 cycles; Low Latency; High Reliability] DAP.F96 4

5 Data Parallel Model Operations can be performed in parallel on each element of a large regular data structure, such as an array 1 Control Processor broadcasts to many PEs (see Ch. 1, Fig. 1-26, page 51 of [CSG96]) When computers were large, could amortize the control portion over many replicated PEs Data distributed in each memory Condition flag per PE so that PEs can skip Early 1980s VLSI => SIMD rebirth: 32 1-bit PEs + memory on a chip was the PE Data parallel programming languages lay out data to processor DAP.F96 5

6 Data Parallel Model Vector processors have similar ISAs, but no data placement restriction Advancing VLSI led to single chip FPUs and whole fast µprocs SIMD programming model led to Single Program Multiple Data (SPMD) model All processors execute identical program Data parallel programming languages still useful, do communication all at once: Bulk Synchronous phases in which all communicate after a global barrier DAP.F96 6
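The SPMD, bulk-synchronous style can be illustrated with a small sketch. Assuming MPI as the message layer (the array size, ring exchange pattern, and variable names are made up for illustration), every processor runs the same program, computes on its own slice of the data, and all communication happens together at the end of each phase:

```c
/* Sketch of an SPMD, bulk-synchronous program: all processors run this same
 * code, compute on local data, then communicate after a global barrier.
 * Build with an MPI compiler, e.g. mpicc. */
#include <mpi.h>
#include <stdio.h>

#define LOCAL_N 1024            /* elements owned by each processor (made up) */

int main(int argc, char **argv) {
    double local[LOCAL_N], halo = 0.0;
    int rank, nprocs, i;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    for (i = 0; i < LOCAL_N; i++)            /* every PE initializes its slice */
        local[i] = rank + i * 0.001;

    for (int phase = 0; phase < 10; phase++) {
        for (i = 0; i < LOCAL_N; i++)        /* compute phase: purely local work */
            local[i] = 0.5 * (local[i] + halo);

        MPI_Barrier(MPI_COMM_WORLD);         /* global barrier ends the phase */

        /* communication phase: everyone exchanges one boundary value with a
         * neighbor (ring pattern chosen just for illustration). */
        MPI_Sendrecv(&local[LOCAL_N - 1], 1, MPI_DOUBLE, (rank + 1) % nprocs, 0,
                     &halo, 1, MPI_DOUBLE, (rank + nprocs - 1) % nprocs, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    if (rank == 0) printf("done on %d processors\n", nprocs);
    MPI_Finalize();
    return 0;
}
```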

7 Convergence in Parallel Architecture Complete computers connected to scalable network via communication assist (see Ch. 1, Fig. 1-29, page 57 of [CSG96]) Different programming models place different requirements on communication assist Shared address space: tight integration with memory to capture memory events that interact with others + to accept requests from other nodes Message passing: send messages quickly and respond to incoming messages: tag match, allocate buffer, transfer data, wait for receive posting Data Parallel: fast global synchronization HPF shared-memory, data parallel; PVM, MPI message passing libraries; both work on many machines, different implementations DAP.F96 7

8 Fundamental Issues: Naming
Naming: how to solve large problem fast
» what data is shared
» how it is addressed
» what operations can access data
» how processes refer to each other
Choice of naming affects code produced by a compiler: via load, where just remember address, or keep track of processor number and local virtual address
Choice of naming affects replication of data: via load in cache memory hierarchy, or via SW replication and consistency
DAP.F96 8

9 Fundamental Issues: Naming Global physical address space: any processor can generate an address and access it in a single operation memory can be anywhere: virtual addr. translation handles it Global virtual address space: if the address space of each process can be configured to contain all shared data of the parallel program Segmented shared address space: if locations are named <process number, address> uniformly for all processes of the parallel program DAP.F96 9

10 Fundamental Issues: Synchronization To cooperate, processes must coordinate Message passing is implicit coordination with transmission or arrival of data Shared address => additional operations to explicitly coordinate: e.g., write a flag, awaken a thread, interrupt a processor DAP.F96 10
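A minimal sketch of the "write a flag" style of explicit coordination, using C11 atomics and POSIX threads (the variable names are illustrative; the release/acquire ordering is what makes the data visible before the flag on real hardware):

```c
/* Producer/consumer coordination through a shared flag, as in the slide:
 * "write a flag, awaken a thread".  Sketch using C11 atomics + pthreads;
 * names (shared_data, ready) are illustrative only.  Build with -pthread. */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

int shared_data;                      /* ordinary shared location */
atomic_int ready = 0;                 /* the coordination flag    */

void *producer(void *arg) {
    shared_data = 42;                                        /* write the data    */
    atomic_store_explicit(&ready, 1, memory_order_release);  /* then set the flag */
    return NULL;
}

void *consumer(void *arg) {
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;                                                    /* spin on the flag  */
    printf("saw %d\n", shared_data);                         /* data is visible   */
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```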

11 Fundamental Issues: Latency and Bandwidth
Bandwidth
» Need high bandwidth in communication
» Cannot scale, but stay close
» Make limits in network, memory, and processor match
» Overhead to communicate is a problem in many machines
Latency
» Affects performance, since processor may have to wait
» Affects ease of programming, since it requires more thought to overlap communication and computation
Latency Hiding
» How can a mechanism help hide latency?
» Examples: overlap message send with computation, prefetch
DAP.F96 11
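One concrete latency-hiding mechanism from the last bullet, overlapping a message send with computation, might look like the following sketch using MPI nonblocking sends (the buffer sizes and the independent-work loop are placeholders):

```c
/* Latency hiding: overlap a message send with independent computation.
 * Sketch only; sizes and the "independent work" are placeholders. */
#include <mpi.h>

#define N 4096

static double do_independent_work(double *a, int n) {
    double s = 0.0;
    for (int i = 0; i < n; i++) s += a[i] * a[i];   /* work that does not touch buf */
    return s;
}

int main(int argc, char **argv) {
    double buf[N], other[N], sum = 0.0;
    int rank, nprocs;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    for (int i = 0; i < N; i++) { buf[i] = rank; other[i] = i; }

    if (nprocs > 1 && rank == 0) {
        MPI_Isend(buf, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);  /* start the send   */
        sum = do_independent_work(other, N);       /* compute while it is in flight     */
        MPI_Wait(&req, MPI_STATUS_IGNORE);         /* only now wait for completion      */
    } else if (nprocs > 1 && rank == 1) {
        MPI_Recv(buf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        sum = do_independent_work(other, N);
    }
    (void)sum;
    MPI_Finalize();
    return 0;
}
```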

12 Small-Scale Shared Memory Caches serve to: Increase bandwidth versus bus/memory Reduce latency of access Valuable for both private data and shared data What about cache consistency? DAP.F96 12

13 The Problem of Cache Coherency DAP.F96 13

14 What Does Coherency Mean?
Informally: Any read must return the most recent write
» Too strict and very difficult to implement
Better: Any write must eventually be seen by a read; all writes are seen in proper order ( serialization )
Two rules to ensure this:
» If P writes x and P1 reads it, P's write will be seen by P1 if the read and write are sufficiently far apart
» Writes to a single location are serialized: seen in one order
  - Latest write will be seen
  - Otherwise could see writes in illogical order (could see older value after a newer value)
DAP.F96 14

15 CS 252 Administrivia Next reading is Chapter 8 of CA:AQA 2/e and Sections , Chapter 1 of upcoming book by Culler, Singh, and Gupta; Remzi Arpaci will talk Fri. 11/8 on Networks of Workstations and world record sort; Dr. Dan Lenoski, architect of SGI Origin, talk in Systems Seminar Thur. 11/14 at 4PM in 306 Soda; Next project review: survey due Mon. 11/11; 20 min. meetings moved to Fri. 11/15; signup Wed. 11/6 DAP.F96 15

16 Potential Solutions Snooping Solution (Snoopy Bus): Send all requests for data to all processors Processors snoop to see if they have a copy and respond accordingly Requires broadcast, since caching information is at processors Works well with bus (natural broadcast medium) Dominates for small scale machines (most of the market) Directory-Based Schemes Keep track of what is being shared in one centralized place Distributed memory => distributed directory (avoids bottlenecks) Send point-to-point requests to processors Scales better than Snoop Actually existed BEFORE Snoop-based schemes DAP.F96 16

17 Basic Snoopy Protocols Write Invalidate Protocol: Multiple readers, single writer Write to shared data: an invalidate is sent to all caches which snoop and invalidate any copies Read Miss:» Write-through: memory is always up-to-date» Write-back: snoop in caches to find most recent copy Write Broadcast Protocol: Write to shared data: broadcast on bus, processors snoop, and update copies Read miss: memory is always up-to-date Write serialization: bus serializes requests Bus is single point of arbitration DAP.F96 17

18 Basic Snoopy Protocols
Write Invalidate versus Broadcast:
» Invalidate requires one transaction per write-run
» Invalidate uses spatial locality: one transaction per block
» Broadcast has lower latency between write and read
» Broadcast: BW (increased) vs. latency (decreased) tradeoff
Name         Protocol Type     Memory-write policy                       Machines using
Write Once   Write invalidate  Write back after first write              First snoopy protocol
Synapse N+1  Write invalidate  Write back                                1st cache-coherent MPs
Berkeley     Write invalidate  Write back                                Berkeley SPUR
Illinois     Write invalidate  Write back                                SGI Power and Challenge
Firefly      Write broadcast   Write back private, write through shared  SPARCCenter 2000
MESI         Write invalidate  Write back                                Pentium, PowerPC
DAP.F96 18

19 Snoop Cache Variations
Basic Protocol:    Exclusive, Shared, Invalid
Berkeley Protocol: Owned Exclusive, Owned Shared, Shared, Invalid
» Owner can update via bus invalidate operation
» Owner must write back when replaced in cache
Illinois Protocol: Private Dirty, Private Clean, Shared, Invalid
» If read sourced from memory, then Private Clean; if read sourced from other cache, then Shared
» Can write in cache if held private clean or dirty
MESI Protocol:     Modified (private, ≠ memory), Exclusive (private, = memory), Shared (shared, = memory), Invalid
DAP.F96 19

20 An Example Snoopy Protocol Invalidation protocol, write-back cache Each block of memory is in one state: Clean in all caches and up-to-date in memory (Shared), OR Dirty in exactly one cache (Exclusive), OR Not in any caches Each cache block is in one state: Shared: block can be read, OR Exclusive: cache has only copy, it's writable and dirty, OR Invalid: block contains no data Read misses: cause all caches to snoop bus Writes to clean line are treated as misses DAP.F96 20

21 Snoopy-Cache State Machine-I (state machine for CPU requests; cache block states: Invalid, Shared (read/only), Exclusive (read/write))
» Invalid, CPU Read: place read miss on bus -> Shared
» Invalid, CPU Write: place write miss on bus -> Exclusive
» Shared, CPU Read hit: no bus action
» Shared, CPU Read miss: place read miss on bus -> Shared
» Shared, CPU Write: place write miss on bus -> Exclusive
» Exclusive, CPU read hit or CPU write hit: no bus action
» Exclusive, CPU Read miss: write back block, place read miss on bus -> Shared
» Exclusive, CPU Write miss: write back cache block, place write miss on bus -> Exclusive
DAP.F96 21

22 Snoopy-Cache State Machine-II (state machine for bus requests)
» Shared, Write miss for this block -> Invalid
» Exclusive, Write miss for this block: write back block; abort memory access -> Invalid
» Exclusive, Read miss for this block: write back block; abort memory access -> Shared
DAP.F96 22
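Read together, slides 21 and 22 define two transition functions for the three-state protocol of slide 20: one driven by the local CPU and one driven by snooped bus requests. The sketch below encodes them in C; it tracks states and bus actions only (no data or tags) and assumes every access is to the block in question, so it is an illustration of the protocol rather than a full cache simulator.

```c
/* Three-state (Invalid / Shared / Exclusive) write-back invalidation snooping
 * protocol: CPU-side and bus-side transition functions, following slides 20-22. */
#include <stdio.h>

typedef enum { INVALID, SHARED, EXCLUSIVE } State;
typedef enum { NO_BUS, READ_MISS, WRITE_MISS } BusAction;

/* Transition when the local CPU reads or writes this block.  Assumes the
 * access is to this block: INVALID means miss; replacement of a different
 * block that maps to the same frame is not modeled. */
static State cpu_access(State s, int is_write, BusAction *act) {
    *act = NO_BUS;
    switch (s) {
    case INVALID:                               /* miss: go get the block       */
        *act = is_write ? WRITE_MISS : READ_MISS;
        return is_write ? EXCLUSIVE : SHARED;
    case SHARED:
        if (is_write) { *act = WRITE_MISS; return EXCLUSIVE; } /* write to clean */
        return SHARED;                          /* read hit                      */
    case EXCLUSIVE:
        return EXCLUSIVE;                       /* read or write hit             */
    }
    return s;
}

/* Transition when another processor's miss for this block is snooped on the bus. */
static State snoop(State s, BusAction observed, int *must_write_back) {
    *must_write_back = 0;
    if (s == EXCLUSIVE && observed != NO_BUS) { /* we hold the only (dirty) copy */
        *must_write_back = 1;                   /* write back; abort memory access */
        return (observed == WRITE_MISS) ? INVALID : SHARED;
    }
    if (s == SHARED && observed == WRITE_MISS)
        return INVALID;
    return s;
}

int main(void) {
    /* Walk the first steps of the slide-24 example for the block holding A1. */
    BusAction act; int wb;
    State p1 = INVALID, p2 = INVALID;

    p1 = cpu_access(p1, 1, &act);               /* P1: write A1 -> Exclusive, WrMs   */
    printf("P1 write: P1 state %d, bus action %d\n", p1, act);

    p2 = cpu_access(p2, 0, &act);               /* P2: read A1 -> Shared, RdMs       */
    p1 = snoop(p1, act, &wb);                   /* P1 snoops: write back, -> Shared  */
    printf("P2 read: P1=%d (write back=%d), P2=%d\n", p1, wb, p2);
    return 0;
}
```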

23 Snoop Cache: State Machine Extensions:
» Fourth state, Ownership: clean -> dirty needs invalidate only (upgrade request), don't read memory (Berkeley Protocol)
» Clean exclusive state (no miss for private data on write) (MESI Protocol)
» Cache supplies data when in shared state (no memory access) (Illinois Protocol)
DAP.F96 23

24 Example (worksheet, all entries initially empty)
Columns: P1 (State, Addr, Value); P2 (State, Addr, Value); Bus (Action, Proc., Addr, Value); Memory (Addr, Value)
Steps:
P1: Write 10 to A1
P1: Read A1
P2: Read A1
P2: Write 20 to A1
P2: Write 40 to A2
Assumes A1 and A2 map to same cache block
DAP.F96 24

25 Example
P1: Write 10 to A1 -> P1: Excl. A1 10; Bus: WrMs P1 A1
P1: Read A1
P2: Read A1
P2: Write 20 to A1
P2: Write 40 to A2
Assumes A1 and A2 map to same cache block
DAP.F96 25

26 Example
P1: Write 10 to A1 -> P1: Excl. A1 10; Bus: WrMs P1 A1
P1: Read A1 -> P1: Excl. A1 10 (read hit)
P2: Read A1
P2: Write 20 to A1
P2: Write 40 to A2
Assumes A1 and A2 map to same cache block
DAP.F96 26

27 Example
P1: Write 10 to A1 -> P1: Excl. A1 10; Bus: WrMs P1 A1
P1: Read A1 -> P1: Excl. A1 10 (read hit)
P2: Read A1 -> P2: Shar. A1; Bus: RdMs P2 A1; P1: Shar. A1 10; Bus: WrBk P1 A1 10 (memory A1 = 10); P2: Shar. A1 10; Bus: RdDa P2 A1 10
P2: Write 20 to A1
P2: Write 40 to A2
Assumes A1 and A2 map to same cache block
DAP.F96 27

28 Example
P1: Write 10 to A1 -> P1: Excl. A1 10; Bus: WrMs P1 A1
P1: Read A1 -> P1: Excl. A1 10 (read hit)
P2: Read A1 -> P2: Shar. A1; Bus: RdMs P2 A1; P1: Shar. A1 10; Bus: WrBk P1 A1 10 (memory A1 = 10); P2: Shar. A1 10; Bus: RdDa P2 A1 10
P2: Write 20 to A1 -> P1: Inv.; P2: Excl. A1 20; Bus: WrMs P2 A1 (memory A1 = 10)
P2: Write 40 to A2
Assumes A1 and A2 map to same cache block
DAP.F96 28

29 Example
P1: Write 10 to A1 -> P1: Excl. A1 10; Bus: WrMs P1 A1
P1: Read A1 -> P1: Excl. A1 10 (read hit)
P2: Read A1 -> P2: Shar. A1; Bus: RdMs P2 A1; P1: Shar. A1 10; Bus: WrBk P1 A1 10 (memory A1 = 10); P2: Shar. A1 10; Bus: RdDa P2 A1 10
P2: Write 20 to A1 -> P1: Inv.; P2: Excl. A1 20; Bus: WrMs P2 A1 (memory A1 = 10)
P2: Write 40 to A2 -> Bus: WrMs P2 A2; Bus: WrBk P2 A1 20 (memory A1 = 20); P2: Excl. A2 40
Assumes A1 and A2 map to same cache block
DAP.F96 29

30 Implementation Complications Write Races: Cannot update cache until bus is obtained» Otherwise, another processor may get bus first, and write the same cache block Two step process:» Arbitrate for bus» Place miss on bus and complete operation If miss occurs to block while waiting for bus, handle miss (invalidate may be needed) and then restart. Split transaction bus:» Bus transaction is not atomic: can have multiple outstanding transactions for a block» Multiple misses can interleave, allowing two caches to grab block in the Exclusive state» Must track and prevent multiple misses for one block Must support interventions and invalidations DAP.F96 30

31 Implementing Snooping Caches Multiple processors must be on bus, access to both addresses and data Add a few new commands to perform coherency, in addition to read and write Processors continuously snoop on address bus If address matches tag, either invalidate or update DAP.F96 31

32 Implementing Snooping Caches Bus serializes writes, getting bus ensures no one else can perform memory operation On a miss in a write back cache, may have the desired copy and it's dirty, so must reply Add extra state bit to cache to determine shared or not Since every bus transaction checks cache tags, could interfere with CPU just to check: solution 1: duplicate set of tags for L1 caches just to allow checks in parallel with CPU solution 2: L2 cache that obeys inclusion with L1 cache DAP.F96 32

33 Larger MPs Separate Memory per Processor Local or Remote access via memory controller Cache Coherency solution: non-cached pages Alternative: directory per cache that tracks state of every block in every cache Which caches have copies of the block, dirty vs. clean,... Info per memory block vs. per cache block? PLUS: In memory => simpler protocol (centralized/one location) MINUS: In memory => directory is ƒ(memory size) vs. ƒ(cache size) Prevent directory as bottleneck: distribute directory entries with memory, each keeping track of which Procs have copies of their blocks DAP.F96 33
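To see why the directory being ƒ(memory size) matters, consider an illustrative calculation (the numbers are assumptions, not from the slide): with 64-byte blocks and 64 processors, a full bit-vector directory entry needs 64 presence bits plus a couple of state bits, roughly 66 bits per 512-bit block, or about 13% extra storage, and the per-entry cost grows linearly with the processor count. Keeping each entry next to the memory it describes, as in the distributed directory on the next slide, at least keeps lookups local to the home node and avoids a single centralized bottleneck.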

34 Distributed Directory MPs DAP.F96 34

35 Directory Protocol Similar to Snoopy Protocol: Three states Shared: ≥ 1 processors have data, memory up-to-date Uncached: no processor has it; not valid in any cache Exclusive: 1 processor (owner) has data; memory out-of-date In addition to cache state, must track which processors have data when in the shared state (usually bit vector, 1 if processor has copy) Keep it simple(r): Writes to non-exclusive data => write miss Processor blocks until access completes Assume messages received and acted upon in order sent DAP.F96 35

36 Directory Protocol No bus and don't want to broadcast: interconnect no longer single arbitration point all messages have explicit responses Terms: Local node is the node where a request originates Home node is the node where the memory location of an address resides Remote node is the node that has a copy of a cache block, whether exclusive or shared Example messages on next slide: P = processor number, A = address DAP.F96 36

37 Directory Protocol Messages
Message type      Source           Destination      Msg contents
Read miss         Local cache      Home directory   P, A
  (Processor P reads data at address A; send data and make P a read sharer)
Write miss        Local cache      Home directory   P, A
  (Processor P writes data at address A; send data and make P the exclusive owner)
Invalidate        Home directory   Remote caches    A
  (Invalidate a shared copy at address A)
Fetch             Home directory   Remote cache     A
  (Fetch the block at address A and send it to its home directory)
Fetch/Invalidate  Home directory   Remote cache     A
  (Fetch the block at address A and send it to its home directory; invalidate the block in the cache)
Data value reply  Home directory   Local cache      Data
  (Return a data value from the home memory)
Data write-back   Remote cache     Home directory   A, Data
  (Write back a data value for address A)
DAP.F96 37
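The message table maps naturally onto two small data structures: an enum of message types and, per memory block at the home node, a directory entry holding the state from slide 35 plus its sharer bit vector. A minimal sketch follows; the field names, the 32-processor bit vector, and the tiny main are illustrative assumptions.

```c
/* Directory protocol messages and per-block directory entry, following the
 * table above.  Sketch only: names and sizes are illustrative. */
#include <stdint.h>

typedef enum {
    MSG_READ_MISS,        /* local cache  -> home directory : P, A    */
    MSG_WRITE_MISS,       /* local cache  -> home directory : P, A    */
    MSG_INVALIDATE,       /* home dir     -> remote caches  : A       */
    MSG_FETCH,            /* home dir     -> remote cache   : A       */
    MSG_FETCH_INVALIDATE, /* home dir     -> remote cache   : A       */
    MSG_DATA_VALUE_REPLY, /* home dir     -> local cache    : data    */
    MSG_DATA_WRITE_BACK   /* remote cache -> home directory : A, data */
} MsgType;

typedef struct {
    MsgType  type;
    int      proc;        /* P: requesting processor (when applicable) */
    uint32_t addr;        /* A: block address                          */
    uint32_t data;        /* payload for replies and write-backs       */
} Message;

typedef enum { DIR_UNCACHED, DIR_SHARED, DIR_EXCLUSIVE } DirState;

typedef struct {
    DirState state;
    uint32_t sharers;     /* bit vector: bit i set if processor i has a copy */
} DirEntry;               /* one entry per memory block at the home node     */

int main(void) {
    DirEntry e = { DIR_UNCACHED, 0 };
    Message  m = { MSG_READ_MISS, /*proc*/ 1, /*addr*/ 0x100, 0 };
    e.state = DIR_SHARED;
    e.sharers |= 1u << m.proc;    /* record P1 as a sharer after its read miss */
    return 0;
}
```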

38 State Transition Diagram for an Individual Cache Block in a Directory-Based System States: Invalid, Shared, Exclusive; identical to snoopy case, transactions very similar. Transitions caused by read misses, write misses, invalidates, data fetch requests. Generates read miss & write miss msgs to home directory. Write misses that were broadcast on the bus => explicit invalidate & data fetch requests. DAP.F96 38

39 State Transition Diagram for the Directory
Same states & structure as the transition diagram for an individual cache block. Two actions: update of directory state & send msgs to satisfy requests. Tracks all copies of each memory block. Also indicates an action that updates the sharing set, Sharers, as opposed to sending a message.
» Uncached, RdMs: Data Value Reply; Sharers = {P} -> Shared
» Uncached, WrMs: Data Value Reply; Sharers = {P} -> Exclusive
» Shared, RdMs: Data Value Reply; Sharers = Sharers + {P} -> Shared
» Shared, WrMs: Invalidate sharers; Data Value Reply; Sharers = {P} -> Exclusive
» Exclusive, RdMs: Fetch; Data Value Reply; Sharers = Sharers + {P} -> Shared
» Exclusive, WrBk: Sharers = {} -> Uncached
» Exclusive, WrMs: Fetch/Invalidate; Data Value Reply; Sharers = {P} -> Exclusive
DAP.F96 39

40 Example Directory Protocol Message sent to directory causes two actions: Update the directory More messages to satisfy request Block is in Uncached state: the copy in memory is the current value; only possible requests for that block are: Read miss: requesting processor is sent data from memory & requestor made only sharing node; state of block made Shared. Write miss: requesting processor is sent the value & becomes the Sharing node. The block is made Exclusive to indicate that the only valid copy is cached. Sharers indicates the identity of the owner. Block is Shared => the memory value is up-to-date: Read miss: requesting processor is sent back the data from memory & requesting processor is added to the sharing set. Write miss: requesting processor is sent the value. All processors in the set Sharers are sent invalidate messages, & Sharers is set to identity of requesting processor. The state of the block is made Exclusive. DAP.F96 40

41 Example Directory Protocol Block is Exclusive: current value of the block is held in the cache of the processor identified by the set Sharers (the owner) => three possible directory requests: Read miss: owner processor sent data fetch message, which causes state of block in owner's cache to transition to Shared and causes owner to send data to directory, where it is written to memory & sent back to requesting processor. Identity of requesting processor is added to set Sharers, which still contains the identity of the processor that was the owner (since it still has a readable copy). Data write-back: owner processor is replacing the block and hence must write it back. This makes the memory copy up-to-date (the home directory essentially becomes the owner), the block is now uncached, and the Sharers set is empty. Write miss: block has a new owner. A message is sent to old owner causing the cache to send the value of the block to the directory, from which it is sent to the requesting processor, which becomes the new owner. Sharers is set to identity of new owner, and state of block is made Exclusive. DAP.F96 41
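Slides 39 through 41 describe the home node's reaction to each request in each state. The sketch below collects that behavior into one handler; it only updates directory state and prints the messages it would send, and the helper names, 32-processor bit vector, and example trace in main are assumptions made for illustration.

```c
/* Home-node directory controller for the three directory states, following
 * the prose on the two preceding slides.  Messages are just printed; a real
 * controller would inject them into the interconnect. */
#include <stdint.h>
#include <stdio.h>

typedef enum { UNCACHED, SHARED, EXCLUSIVE } DirState;
typedef enum { READ_MISS, WRITE_MISS, DATA_WRITE_BACK } Request;

typedef struct {
    DirState state;
    uint32_t sharers;       /* bit i set => processor i has a copy */
} DirEntry;

static void send(const char *msg, int dest) { printf("  send %s to P%d\n", msg, dest); }

static void handle(DirEntry *d, Request req, int p) {
    switch (d->state) {
    case UNCACHED:                          /* memory copy is the current value   */
        send("DataValueReply", p);
        d->sharers = 1u << p;
        d->state = (req == READ_MISS) ? SHARED : EXCLUSIVE;
        break;
    case SHARED:                            /* memory is up-to-date               */
        if (req == READ_MISS) {
            send("DataValueReply", p);
            d->sharers |= 1u << p;          /* add requestor to the sharing set   */
        } else {                            /* WRITE_MISS: invalidate other copies */
            for (int i = 0; i < 32; i++)
                if (i != p && (d->sharers & (1u << i))) send("Invalidate", i);
            send("DataValueReply", p);
            d->sharers = 1u << p;
            d->state = EXCLUSIVE;
        }
        break;
    case EXCLUSIVE: {                       /* owner's cache has the only copy    */
        int owner = 0;
        while (!(d->sharers & (1u << owner))) owner++;
        if (req == READ_MISS) {
            send("Fetch", owner);           /* owner writes back, keeps a copy    */
            send("DataValueReply", p);
            d->sharers |= 1u << p;
            d->state = SHARED;
        } else if (req == WRITE_MISS) {
            send("Fetch/Invalidate", owner);
            send("DataValueReply", p);
            d->sharers = 1u << p;           /* requestor becomes the new owner    */
        } else {                            /* DATA_WRITE_BACK from the owner     */
            d->sharers = 0;
            d->state = UNCACHED;
        }
        break;
    }
    }
}

int main(void) {
    DirEntry a1 = { UNCACHED, 0 };
    printf("P1 write miss:\n"); handle(&a1, WRITE_MISS, 1);   /* -> Exclusive {P1} */
    printf("P2 read miss:\n");  handle(&a1, READ_MISS, 2);    /* -> Shared {P1,P2} */
    printf("P2 write miss:\n"); handle(&a1, WRITE_MISS, 2);   /* -> Exclusive {P2} */
    return 0;
}
```

Running it reproduces the message sequence of the directory example on the following slides (WrMs, Fetch, Inval., DaRp) for the block holding A1.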

42 Implementing a Directory We assume operations atomic, but they are not; reality is much harder; must avoid deadlock when run out of buffers in network (see Appendix E) Optimizations: read miss or write miss in Exclusive: send data directly to requestor from owner vs. 1st to memory and then from memory to requestor DAP.F96 42

43 Example (worksheet, all entries initially empty)
Columns: P1 (State, Addr, Value); P2 (State, Addr, Value); Bus (Action, Proc., Addr, Value); Directory (Addr, State, {Procs}); Memory (Value)
Steps:
P1: Write 10 to A1
P1: Read A1
P2: Read A1
P2: Write 20 to A1
P2: Write 40 to A2
A1 and A2 map to the same cache block
DAP.F96 43

44 Example
P1: Write 10 to A1 -> Bus: WrMs P1 A1; Directory: A1 Ex {P1}; Bus: DaRp P1 A1 0; P1: Excl. A1 10
P1: Read A1
P2: Read A1
P2: Write 20 to A1
P2: Write 40 to A2
A1 and A2 map to the same cache block
DAP.F96 44

45 Example
P1: Write 10 to A1 -> Bus: WrMs P1 A1; Directory: A1 Ex {P1}; Bus: DaRp P1 A1 0; P1: Excl. A1 10
P1: Read A1 -> P1: Excl. A1 10 (read hit)
P2: Read A1
P2: Write 20 to A1
P2: Write 40 to A2
A1 and A2 map to the same cache block
DAP.F96 45

46 Example
P1: Write 10 to A1 -> Bus: WrMs P1 A1; Directory: A1 Ex {P1}; Bus: DaRp P1 A1 0; P1: Excl. A1 10
P1: Read A1 -> P1: Excl. A1 10 (read hit)
P2: Read A1 -> P2: Shar. A1; Bus: RdMs P2 A1; P1: Shar. A1 10; Bus: Ftch P1 A1 10 (memory A1 = 10); P2: Shar. A1 10; Bus: DaRp P2 A1 10; Directory: A1 Shar. {P1,P2}
P2: Write 20 to A1
P2: Write 40 to A2
A1 and A2 map to the same cache block
DAP.F96 46

47 Example
P1: Write 10 to A1 -> Bus: WrMs P1 A1; Directory: A1 Ex {P1}; Bus: DaRp P1 A1 0; P1: Excl. A1 10
P1: Read A1 -> P1: Excl. A1 10 (read hit)
P2: Read A1 -> P2: Shar. A1; Bus: RdMs P2 A1; P1: Shar. A1 10; Bus: Ftch P1 A1 10 (memory A1 = 10); P2: Shar. A1 10; Bus: DaRp P2 A1 10; Directory: A1 Shar. {P1,P2}
P2: Write 20 to A1 -> P2: Excl. A1 20; Bus: WrMs P2 A1; Bus: Inval. P1 A1; P1: Inv.; Directory: A1 Excl. {P2} (memory A1 = 10)
P2: Write 40 to A2
A1 and A2 map to the same cache block
DAP.F96 47

48 Example
P1: Write 10 to A1 -> Bus: WrMs P1 A1; Directory: A1 Ex {P1}; Bus: DaRp P1 A1 0; P1: Excl. A1 10
P1: Read A1 -> P1: Excl. A1 10 (read hit)
P2: Read A1 -> P2: Shar. A1; Bus: RdMs P2 A1; P1: Shar. A1 10; Bus: Ftch P1 A1 10 (memory A1 = 10); P2: Shar. A1 10; Bus: DaRp P2 A1 10; Directory: A1 Shar. {P1,P2}
P2: Write 20 to A1 -> P2: Excl. A1 20; Bus: WrMs P2 A1; Bus: Inval. P1 A1; P1: Inv.; Directory: A1 Excl. {P2} (memory A1 = 10)
P2: Write 40 to A2 -> Bus: WrMs P2 A2; Directory: A2 Excl. {P2}; Bus: WrBk P2 A1 20; Directory: A1 Unca. {} (memory A1 = 20); Bus: DaRp P2 A2 0; P2: Excl. A2 40
A1 and A2 map to the same cache block
DAP.F96 48

49 Miss Rates for Snooping Protocol 4th C: Conflict, Capacity, Compulsory and Coherency Misses More processors: increase coherency misses while decreasing capacity misses since more cache memory (for fixed problem size) Cache behavior of Five Parallel Programs: FFT Fast Fourier Transform: matrix transposition + computation LU factorization of dense 2D matrix (linear algebra) Barnes-Hut n-body algorithm solving galaxy evolution problem Ocean simulates influence of eddy & boundary currents on large-scale flow in ocean: dynamic arrays per grid Volrend volume rendering (the fifth program, shown in the following graphs) DAP.F96 49

50 Miss Rates for Snooping Protocol
[Chart: miss rate (0-20%) vs. # of processors for fft, lu, barnes, ocean, volrend; Ocean shows high capacity misses; big differences in miss rates among the programs]
Cache size is 64KB, 2-way set associative, with 32B blocks. Misses in these applications are generated by accesses to data that is potentially shared. Except for Ocean, data is heavily shared; in Ocean only the boundaries of the subgrids are shared, though the entire grid is treated as a shared data object. Since the boundaries change as we increase the processor count (for a fixed size problem), different amounts of the grid become shared. The anomalous increase in miss rate for Ocean in moving from 1 to 2 processors arises because of conflict misses in accessing the subgrids.
DAP.F96 50

51 % Misses Caused by Coherency Traffic vs. # of Processors
[Chart: % of misses due to coherency (0-80%) vs. processor count for fft, lu, barnes, ocean, volrend; in FFT up to 80% of misses are coherency misses]
The % of cache misses caused by coherency transactions typically rises when a fixed size problem is run on more processors. The absolute number of coherency misses is increasing in all these benchmarks, including Ocean. In Ocean, however, it is difficult to separate out these misses from others, since the amount of sharing of the grid varies with processor count. Invalidations increase significantly; in FFT, the miss rate arising from coherency misses increases from nothing to almost 7%.
DAP.F96 51

52 Miss Rates as Increase Cache Size/Processor
[Chart: miss rate (0-20%) vs. cache size in KB for fft, lu, barnes, ocean, volrend; Ocean and FFT strongly influenced by capacity misses]
Miss rate drops as the cache size is increased, unless the miss rate is dominated by coherency misses. The block size is 32B & the cache is 2-way set-associative. The processor count is fixed at 16 processors.
DAP.F96 52

53 Miss Rate vs. Block Size
Since a cache block holds multiple words, may get coherency traffic for unrelated variables in the same block.
False sharing arises from the use of an invalidation-based coherency algorithm. It occurs when a block is invalidated (and a subsequent reference causes a miss) because some word in the block, other than the one being read, is written into.
[Chart: miss rate (0-14%) vs. block size for fft, lu, barnes, ocean, volrend; miss rates mostly fall with increasing block size]
DAP.F96 53
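False sharing is easy to reproduce (and fix) in a small program: two threads repeatedly write different words that happen to share a cache block, so an invalidation-based protocol bounces the block between their caches. The sketch below is illustrative only: the 64-byte block size is an assumption, and the effect shows up as a difference in wall-clock time between the unpadded and padded runs rather than as a printed miss rate.

```c
/* False sharing demo: two threads update *different* words that live in the
 * same cache block, so an invalidation-based protocol ping-pongs the block.
 * Giving each counter its own block removes the effect.
 * Build: cc -O2 -pthread false_sharing.c; time each run externally. */
#include <pthread.h>
#include <stdio.h>

#define ITERS 100000000L
#define BLOCK 64                              /* assumed coherence block size (bytes) */

struct { volatile long v; } shared[2];        /* both counters in one block (likely)  */
struct padded_ctr { _Alignas(BLOCK) volatile long v; };
struct padded_ctr padded[2];                  /* each counter gets its own block      */

static void *bump_shared(void *arg) {
    long id = (long)arg;
    for (long i = 0; i < ITERS; i++) shared[id].v++;   /* invalidates the neighbor    */
    return NULL;
}
static void *bump_padded(void *arg) {
    long id = (long)arg;
    for (long i = 0; i < ITERS; i++) padded[id].v++;   /* no false sharing            */
    return NULL;
}

static void run(void *(*fn)(void *), const char *label) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, fn, (void *)0);
    pthread_create(&t1, NULL, fn, (void *)1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("%s done\n", label);
}

int main(void) {
    run(bump_shared, "unpadded (false sharing)");
    run(bump_padded, "padded   (no false sharing)");
    return 0;
}
```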

54 % Misses Caused by Coherency Traffic vs. Block Size
[Chart: % of misses due to coherency (0-90%) vs. block size for fft, lu, barnes, ocean, volrend; behavior tracks the cache size behavior; in FFT, coherence misses are reduced faster than capacity misses]
FFT communicates data in large blocks & communication adapts to the block size (it is a parameter to the code); it makes effective use of large blocks. Ocean has competing effects that favor different block sizes: in accesses to the boundary of each subgrid, in one direction the accesses match the array layout, taking advantage of large blocks, while in the other dimension they do not match. These two effects largely cancel each other out, leading to an overall decrease in the coherency misses as well as the capacity misses.
DAP.F96 54

55 Bus Traffic as Increase Block Size
[Chart: bytes per data reference vs. block size for fft, lu, barnes, ocean, volrend; huge increases in bus traffic due to coherency, largest for Ocean]
Bus traffic climbs steadily as the block size is increased. The factor of 3 increase in traffic for Ocean is the best argument against larger block sizes. Remember that our protocol treats ownership misses the same as other misses, slightly increasing the penalty for large cache blocks: in both Ocean and FFT this effect accounts for less than 10% of the traffic.
DAP.F96 55

56 Miss Rates for Directory
[Chart: miss rate (0-7%) vs. # of processors for fft, lu, barnes, ocean, volrend]
Use larger cache to circumvent longer latencies to directories: cache size is 128 KB, 2-way set associative, with 64B blocks (to cover the longer latency).
Ocean: only the boundaries of the subgrids are shared. Since the boundaries change as we increase the processor count (for a fixed size problem), different amounts of the grid become shared. The increase in miss rate for Ocean in moving from 32 to 64 processors arises because of conflict misses in accessing small subgrids & coherency misses at 64 processors.
DAP.F96 56

57 Miss Rates as Increase Cache Size/Processor for Directory
[Chart: miss rate (0-18%) vs. cache size for fft, lu, barnes, ocean, volrend]
Miss rate drops as the cache size is increased, unless the miss rate is dominated by coherency misses. The block size is 64B and the cache is 2-way set-associative. The processor count is fixed at 16 processors.
DAP.F96 57

58 Block Size for Directory
[Chart: miss rate (0-14%) vs. block size for fft, lu, barnes, ocean, volrend]
Assumes 128 KB cache & 64 processors. Large cache size to combat higher memory latencies than snoop caches.
DAP.F96 58

59 Summary Caches contain all information on state of cached memory blocks Snooping and Directory Protocols similar; bus makes snooping easier because of broadcast Directory has extra data structure to keep track of state of all cache blocks Distributing directory => scalable shared address multiprocessor DAP.F96 59
