Multi-core architectures. Jernej Barbic, Spring 2007, May 3, 2007
1 Multi-core architectures Jernej Barbic, Spring 2007, May 3, 2007
2 Single-core computer 2
3 Single-core CPU chip (figure highlighting the single core) 3
4 Multi-core architectures This lecture is about a new trend in computer architecture: replicate multiple processor cores on a single die. (figure: Core 1, Core 2, Core 3, Core 4 on one multi-core CPU chip) 4
5 Multi-core CPU chip The cores fit on a single processor socket. Also called CMP (Chip Multi-Processor). (figure: four cores on one chip)
6 The cores run in parallel (figure: thread 1, thread 2, thread 3, thread 4, each running on its own core)
7 Within each core, threads are time-sliced (just like on a uniprocessor) (figure: several threads multiplexed onto each of the four cores)
8 Interaction with the Operating System The OS perceives each core as a separate processor. The OS scheduler maps threads/processes to different cores. Most major OSes support multi-core today: Windows, Linux, Mac OS X, etc. 8
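Since the OS exposes each core (and, with SMT, each hardware thread) as a logical processor, a program can ask how many it sees. A minimal C sketch, assuming a Linux/glibc-style sysconf(); this snippet is an illustration, not part of the original slides:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* _SC_NPROCESSORS_ONLN: number of logical processors currently online */
    long n = sysconf(_SC_NPROCESSORS_ONLN);
    printf("online logical processors: %ld\n", n);
    return 0;
}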
9 Why multi-core? Difficult to make single-core clock frequencies even higher. Deeply pipelined circuits: heat problems, speed-of-light problems, difficult design and verification, large design teams necessary, server farms need expensive air-conditioning. Many new applications are multithreaded. General trend in computer architecture (shift towards more parallelism) 9
10 Instruction-level parallelism Parallelism at the machine-instruction level. The processor can re-order and pipeline instructions, split them into microinstructions, do aggressive branch prediction, etc. Instruction-level parallelism enabled rapid increases in processor speeds over the last 15 years 10
11 Thread-level parallelism (TLP) This is parallelism on a coarser scale. A server can serve each client in a separate thread (Web server, database server). A computer game can do AI, graphics, and physics in three separate threads. Single-core superscalar processors cannot fully exploit TLP. Multi-core architectures are the next step in processor evolution: explicitly exploiting TLP 11
12 General context: Multiprocessors A multiprocessor is any computer with several processors. SIMD: single instruction, multiple data (e.g., modern graphics cards). MIMD: multiple instructions, multiple data (e.g., the Lemieux cluster at the Pittsburgh Supercomputing Center) 12
13 Multiprocessor memory types Shared memory: In this model, there is one (large) common shared memory for all processors Distributed memory: In this model, each processor has its own (small) local memory, and its content is not replicated anywhere else 13
14 Multi-core processor is a special kind of multiprocessor: all processors are on the same chip. Multi-core processors are MIMD: different cores execute different threads (Multiple Instructions), operating on different parts of memory (Multiple Data). Multi-core is a shared memory multiprocessor: all cores share the same memory 14
15 What applications benefit from multi-core? Database servers, Web servers (Web commerce), compilers, multimedia applications, scientific applications, CAD/CAM. In general, applications with thread-level parallelism (as opposed to instruction-level parallelism). Each can run on its own core 15
16 More examples Editing a photo while recording a TV show through a digital video recorder. Downloading software while running an anti-virus program. Anything that can be threaded today will map efficiently to multi-core. BUT: some applications are difficult to parallelize 16
17 A technique complementary to multi-core: Simultaneous multithreading Problem addressed: the processor pipeline can get stalled: waiting for the result of a long floating-point (or integer) operation, or waiting for data to arrive from memory. Other execution units wait unused. (figure: processor block diagram with L1 D-Cache and D-TLB, L2 cache and control, integer and floating-point units, schedulers, uop queues, rename/alloc, trace cache, decoder, BTB and I-TLB, ucode ROM, and bus; source: Intel) 17
18 Simultaneous multithreading (SMT) Permits multiple independent threads to execute SIMULTANEOUSLY on the SAME core Weaving together multiple threads on the same core Example: if one thread is waiting for a floating point operation to complete, another thread can use the integer units 18
19 Without SMT, only a single thread can run at any given time (figure: processor block diagram; Thread 1: floating point) 19
20 Without SMT, only a single thread can run at any given time (figure: processor block diagram; Thread 2: integer operation) 20
21 SMT processor: both threads can run concurrently (figure: processor block diagram; Thread 1: floating point, Thread 2: integer operation) 21
22 But: can't simultaneously use the same functional unit (figure: Thread 1 and Thread 2 both targeting the integer unit, marked IMPOSSIBLE). This scenario is impossible with SMT on a single core (assuming a single integer unit) 22
23 SMT not a true parallel processor Enables better threading (performance gains of, e.g., up to 30%). OS and applications perceive each simultaneous thread as a separate virtual processor. The chip has only a single copy of each resource. Compare to multi-core: each core has its own copy of resources 23
24 Multi-core: threads can run on separate cores (figure: two cores, each with its own caches and execution resources; Thread 1 on one core, Thread 2 on the other) 24
25 Multi-core: threads can run on separate cores (figure: two cores, each with its own caches and execution resources; Thread 3 on one core, Thread 4 on the other) 25
26 Combining Multi-core and SMT Cores can be SMT-enabled (or not). The different combinations: single-core, non-SMT: standard uniprocessor; single-core, with SMT; multi-core, non-SMT; multi-core, with SMT: our fish machines. The number of SMT threads: 2, 4, or sometimes 8 simultaneous threads. Intel calls them hyper-threads 26
27 SMT Dual-core: all four threads can run concurrently (figure: two SMT cores running Threads 1, 2, 3, and 4 concurrently) 27
28 Comparison: multi-core vs SMT Advantages/disadvantages? 28
29 Comparison: multi-core vs SMT Multi-core: since there are several cores, each is smaller and not as powerful (but also easier to design and manufacture); however, great with thread-level parallelism. SMT: can have one large and fast superscalar core; great performance on a single thread; mostly still only exploits instruction-level parallelism 29
30 The memory hierarchy If simultaneous multithreading only: all caches shared. Multi-core chips: L1 caches private; L2 caches private in some architectures and shared in others. Memory is always shared 30
31 Fish machines Dual-core Intel Xeon processors. Each core is hyper-threaded. Private L1 caches, shared L2 cache. (figure: two cores, each running hyper-threads with its own L1 cache, sharing an L2 cache and memory) 31
32 Designs with private L2 caches Both L1 and L2 are private; examples: AMD Opteron, AMD Athlon, Intel Pentium D. A design with L3 caches; example: Intel Itanium 2. (figures: left, two cores each with private L1 and L2 caches over memory; right, the same with L3 caches added between the L2 caches and memory) 32
33 Private vs shared caches? Advantages/disadvantages? 33
34 Private vs shared caches Advantages of private: they are closer to the core, so faster access; reduces contention. Advantages of shared: threads on different cores can share the same cached data; more cache space is available if a single (or a few) high-performance thread runs on the system 34
35 The coherence problem Since we have private caches: how to keep the data consistent across caches? Each core should perceive the memory as a monolithic array, shared by all the cores 35
36 The coherence problem Suppose variable x initially contains 15213. (figure: four cores with empty caches; main memory holds x=15213) 36
37 The coherence problem Core 1 reads x. (figure: Core 1's cache now holds x=15213; main memory holds x=15213) 37
38 The coherence problem Core 2 reads x. (figure: Core 1's and Core 2's caches each hold x=15213; main memory holds x=15213) 38
39 The coherence problem Core 1 writes to x, setting it to 21660. (figure: Core 1's cache holds x=21660, Core 2's cache still holds x=15213; main memory holds x=21660, assuming write-through caches) 39
40 The coherence problem Core 2 attempts to read x and gets a stale copy. (figure: Core 1's cache holds x=21660, Core 2's cache holds the stale x=15213; main memory holds x=21660) 40
41 Solutions for coherence This is a general problem with multiprocessors, not limited to multi-core. There exist many solution algorithms, coherence protocols, etc. A simple solution: invalidation-based protocol with snooping 41
42 Inter-core bus (figure: Core 1, Core 2, Core 3, Core 4 on a multi-core chip, connected by an inter-core bus, with main memory below) 42
43 Invalidation protocol with snooping Invalidation: if a core writes to a data item, all other copies of this data item in other caches are invalidated. Snooping: all cores continuously snoop (monitor) the bus connecting the cores. 43
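To make the mechanism concrete, here is a toy, software-level sketch of write-through caching with invalidation and snooping. It is purely illustrative (real coherence lives in the hardware cache controllers), and all names here (cache_line_t, core_write, snoop_bus_write, and so on) are made up for this sketch:

#include <stdbool.h>

#define NUM_CORES 4
#define MEM_SIZE  16

typedef struct {
    int  addr;    /* which memory address this line holds */
    int  value;   /* cached data */
    bool valid;   /* cleared when another core's write invalidates us */
} cache_line_t;

static cache_line_t cache[NUM_CORES];   /* one single-line "cache" per core */
static int memory[MEM_SIZE];            /* main memory */

/* Every other cache controller snoops writes that appear on the bus. */
static void snoop_bus_write(int writer, int addr)
{
    for (int c = 0; c < NUM_CORES; c++)
        if (c != writer && cache[c].valid && cache[c].addr == addr)
            cache[c].valid = false;     /* invalidate the stale copy */
}

/* Write-through write: update own cache and memory, broadcast on the bus. */
static void core_write(int core, int addr, int value)
{
    cache[core] = (cache_line_t){ addr, value, true };
    memory[addr] = value;               /* write-through to main memory */
    snoop_bus_write(core, addr);        /* other caches drop their copies */
}

/* Read: hit if a valid line with a matching address is held, else refill. */
static int core_read(int core, int addr)
{
    if (!(cache[core].valid && cache[core].addr == addr))
        cache[core] = (cache_line_t){ addr, memory[addr], true };  /* miss */
    return cache[core].value;
}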
44 The coherence problem Revisited: Cores 1 and 2 have both read x. (figure: Core 1's and Core 2's caches each hold x=15213; main memory holds x=15213) 44
45 The coherence problem Core 1 writes to x, setting it to 21660, and sends an invalidation request over the inter-core bus. (figure: Core 1's cache holds x=21660; Core 2's copy is INVALIDATED; main memory holds x=21660, assuming write-through caches) 45
46 The coherence problem After invalidation: (figure: only Core 1's cache holds x=21660; main memory holds x=21660) 46
47 The coherence problem Core 2 reads x. Its cache misses and loads the new copy. (figure: Core 1's and Core 2's caches both hold x=21660; main memory holds x=21660) 47
48 Alternative to the invalidation protocol: the update protocol. Core 1 writes x=21660 and broadcasts the updated value over the inter-core bus. (figure: Core 1's cache holds x=21660; Core 2's copy is UPDATED to x=21660; main memory holds x=21660, assuming write-through caches) 48
49 Which do you think is better? Invalidation or update? 49
50 Invalidation vs update Multiple writes to the same location: invalidation generates bus traffic only on the first write; update must broadcast each write (which includes the new variable value). Invalidation generally performs better: it generates less bus traffic 50
51 Invalidation protocols This was just the basic invalidation protocol More sophisticated protocols use extra state bits MSI, MESI (Modified, Exclusive, Shared, Invalid) 51
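As a rough illustration of those state bits (a hypothetical C sketch; only the state names come from the protocol itself):

/* Per-cache-line state bits used by MESI-style protocols. */
typedef enum {
    MODIFIED,   /* this cache holds the only copy, and it is dirty */
    EXCLUSIVE,  /* this cache holds the only copy, still clean (MESI only) */
    SHARED,     /* clean copy that other caches may also hold */
    INVALID     /* stale; must be re-fetched before the next read */
} mesi_state_t;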
52 Programming for multi-core Programmers must use threads or processes. Spread the workload across multiple cores. Write parallel algorithms. The OS will map threads/processes to cores 52
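For example (a hypothetical pthreads sketch, not taken from the slides), a program can split a summation across several worker threads and let the OS scheduler place them on different cores:

/* Sketch: splitting a simple summation across worker threads. */
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4
#define N 1000000

static double data[N];

typedef struct { int begin, end; double partial; } work_t;

static void *worker(void *arg)
{
    work_t *w = arg;
    w->partial = 0.0;
    for (int i = w->begin; i < w->end; i++)
        w->partial += data[i];          /* each thread sums its own slice */
    return NULL;
}

int main(void)
{
    pthread_t tid[NUM_THREADS];
    work_t work[NUM_THREADS];

    for (int i = 0; i < N; i++) data[i] = 1.0;

    int chunk = N / NUM_THREADS;
    for (int t = 0; t < NUM_THREADS; t++) {
        work[t].begin = t * chunk;
        work[t].end = (t == NUM_THREADS - 1) ? N : (t + 1) * chunk;
        pthread_create(&tid[t], NULL, worker, &work[t]);
    }

    double total = 0.0;
    for (int t = 0; t < NUM_THREADS; t++) {
        pthread_join(tid[t], NULL);
        total += work[t].partial;       /* combine after all threads finish */
    }
    printf("sum = %f\n", total);
    return 0;
}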
53 Thread safety is very important Pre-emptive context switching: a context switch can happen AT ANY TIME. True concurrency, not just uniprocessor time-slicing. Concurrency bugs are exposed much faster with multi-core 53
54 However: Need to use synchronization even if only time-slicing on a uniprocessor

int counter = 0;          /* shared between the two threads */

void thread1() {
    int temp1 = counter;  /* read ...                         */
    counter = temp1 + 1;  /* ... modify and write: not atomic */
}

void thread2() {
    int temp2 = counter;
    counter = temp2 + 1;
}

54
55 Need to use synchronization even if only time-slicing on a uniprocessor

One interleaving:
temp1 = counter; counter = temp1 + 1; temp2 = counter; counter = temp2 + 1;
gives counter = 2.

Another interleaving:
temp1 = counter; temp2 = counter; counter = temp1 + 1; counter = temp2 + 1;
gives counter = 1.

55
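One standard remedy (shown here as a POSIX-threads sketch rather than code from the slides) is to protect the read-modify-write of counter with a mutex, so the losing interleaving above cannot occur:

#include <pthread.h>

static int counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

void *increment(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&counter_lock);   /* enter critical section */
    int temp = counter;
    counter = temp + 1;                  /* no other thread can interleave here */
    pthread_mutex_unlock(&counter_lock); /* leave critical section */
    return NULL;
}

Atomic increment instructions (or C11 atomics) are a lighter-weight alternative for this particular pattern.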
56 Assigning threads to the cores Each thread/process has an affinity mask Affinity mask specifies what cores the thread is allowed to run on Different threads can have different masks Affinities are inherited across fork() 56
57 Affinity masks are bit vectors Example: 4-way multi-core, without SMT: one bit per core (core 3, core 2, core 1, core 0). With mask bits 1 1 0 1, the process/thread is allowed to run on cores 0, 2, and 3, but not on core 1 57
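Written out in C, that example mask is just a small bit vector (a hypothetical snippet for illustration):

/* Build the mask 1101 (binary) = 0xd: allow cores 0, 2 and 3, but not core 1. */
#include <stdio.h>

int main(void)
{
    unsigned long affinity_mask = (1UL << 0) | (1UL << 2) | (1UL << 3);
    printf("affinity mask: %#lx\n", affinity_mask);   /* prints 0xd */
    return 0;
}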
58 Affinity masks when multi-core and SMT are combined Separate bits for each simultaneous thread. Example: 4-way multi-core, 2 threads per core: eight bits, two (thread 1, thread 0) per core, ordered from core 3 down to core 0. In the example mask, core 2 can't run the process, and core 1 can only use one of its simultaneous threads 58
59 Default Affinities Default affinity mask is all 1s: all threads can run on all processors Then, the OS scheduler decides what threads run on what core OS scheduler detects skewed workloads, migrating threads to less busy processors 59
60 Process migration is costly Need to restart the execution pipeline. Cached data is invalidated. The OS scheduler tries to avoid migration as much as possible: it tends to keep a thread on the same core. This is called soft affinity 60
61 Hard affinities The programmer can prescribe her own affinities (hard affinities). Rule of thumb: use the default scheduler unless there is a good reason not to 61
62 When to set your own affinities Two (or more) threads share data structures in memory: map them to the same core so that they can share the cache. Real-time threads: for example, a thread running a robot controller must not be context-switched, or else the robot can go unstable; dedicate an entire core just to this thread. Source: Sensable.com 62
63 Kernel scheduler API #include <sched.h> int sched_getaffinity(pid_t pid, unsigned int len, unsigned long *mask); Retrieves the current affinity mask of process pid and stores it into the space pointed to by mask. len is the system word size: sizeof(unsigned long) 63
64 Kernel scheduler API #include <sched.h> int sched_setaffinity(pid_t pid, unsigned int len, unsigned long *mask); Sets the current affinity mask of process pid to *mask. len is the system word size: sizeof(unsigned long). To query the affinity of a running process: [barbic@bonito ~]$ taskset -p 3935 pid 3935's current affinity mask: f 64
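The prototypes above show the older raw-syscall-style interface; current glibc exposes the same calls in terms of cpu_set_t. A minimal sketch under that assumption, reusing the cores 0, 2, 3 example from earlier:

#define _GNU_SOURCE          /* for CPU_* macros and sched_setaffinity */
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);       /* allow cores 0, 2 and 3, but not core 1 */
    CPU_SET(2, &mask);
    CPU_SET(3, &mask);

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    /* read the mask back and print which CPUs are allowed */
    if (sched_getaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_getaffinity");
        return 1;
    }
    for (int cpu = 0; cpu < CPU_SETSIZE; cpu++)
        if (CPU_ISSET(cpu, &mask))
            printf("allowed to run on CPU %d\n", cpu);
    return 0;
}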
65 Windows Task Manager (screenshot: per-core CPU usage graphs for core 1 and core 2) 65
66 Legal licensing issues Will software vendors charge a separate license per core, or only a single license per chip? Microsoft, Red Hat Linux, and SUSE Linux will license their OS per chip, not per core 66
67 Conclusion Multi-core chips are an important new trend in computer architecture. Several new multi-core chips are in design phases. Parallel programming techniques are likely to gain importance 67