The Stanford Dash Multiprocessor
EE382V: Computer Architecture: User System Interplay
Lecture #17
Department of Electrical and Computer Engineering
The University of Texas at Austin
Monday, 2 April 2007

Disclaimer: The contents of this document are scribe notes for The University of Texas at Austin EE382V Spring 2007, Computer Architecture: User System Interplay. The notes capture the class discussion and may contain erroneous and unverified information and comments.

The Stanford Dash Multiprocessor
Lecture #17: Monday, 26 March 2007
Lecturer: Mattan Erez
Scribe: Carl Olson
Reviewer: Mattan Erez, Min Kyu Jeong

The Computer Systems Laboratory at Stanford University developed a shared-memory multiprocessor called Dash (an abbreviation for Directory Architecture for Shared Memory). The fundamental premise behind the architecture is that it is possible to build a scalable high-performance machine with a single address space and coherent caches [1]. Dash accomplishes this by using a directory distributed among clusters of processors with a hierarchical bus structure. Although the Dash multiprocessor was a research investigation of highly parallel architectures, a working prototype with 64 processors was built and tested at Stanford before the paper was published in March 1992.

Reference: [1] Lenoski et al., "The Stanford Dash Multiprocessor," IEEE Computer, Volume 25, Issue 3 (March 1992).

1 What is the problem being solved?

- Leverage commodity microprocessors to build a scalable high-performance machine with a single address space and coherent caches.
- Provide scalability that achieves linear or near-linear performance from tens of processors to thousands of processors while maintaining the price/performance advantage of an individual processor.
- Investigate parallel architectures without imposing excessive programming difficulty. The designers did not want to complicate the programming model.
- Cache-coherent shared-address machines do not scale effectively: snooping schemes saturate the common bus and the individual processor caches.
- Data partitioning and dynamic load distribution remain hard problems in parallel machines.

Copyright 2007 Carl Olson and Mattan Erez, all rights reserved. This work may be reproduced and redistributed, in whole or in part, without prior written permission, provided all copies cite the original source of the document including the names of the copyright holders and The University of Texas at Austin EE382V Spring 2007, Computer Architecture: User System Interplay.
2 Who are the intended users?

- Application programmers porting sequential applications to parallel machines.
- Users of current and future parallel applications.
- General-purpose processing requiring high total system performance.
- Users of message-passing machines, because Dash can emulate message-passing machines.

2.1 Intended Readers

- Operating system developers.
- Writers of automatically parallelizing compilers.
- Computer architects defining scalable parallel architectures.
- Researchers developing interconnection networks. Dash claims to be interconnect-independent, but an interconnect with high point-to-point latency and no broadcast capability drives up the latency and bandwidth cost of coherence, making it a poor match for Dash. The interconnect is not made explicit in the paper, but Dash is not really independent of the interconnect type.
- Parallel language developers who want to exploit the underlying parallel hardware.
- Researchers interested in evaluation techniques for their parallel architectures.
- Readers interested in a brief overview of cache coherence and consistency concepts.

3 What is unique about the suggested solution?

- First operational machine to include scalable cache coherence.
- A coherence protocol exploiting partitioned and distributed directories rather than a centralized one.
- Hierarchical (cluster-based) cache and bus model: a combination of point-to-point links and buses.
- Distributed memory to exploit locality. Private data can be made local to the cluster to avoid long-latency remote references.
- Coherence traffic is directory-based point-to-point messaging. Directories allow invalidations to be sent only to the caches that need them (a minimal sketch follows Section 4).
- Prefetching, plus update-write and deliver primitives in the directory, for handling blocking read misses. The update-write operation sends the new data directly to all processors that have cached the data, while the deliver operation sends the data to specified clusters [1].
- Synchronization techniques: queue-based locks and fetch-and-increment operations. The queue-based locks in Dash use the directory to indicate which processors are spinning on the lock. Fetch-and-increment operations have low serialization and are useful for implementing several synchronization primitives such as barriers, distributed loops, and work queues [1] (a barrier sketch follows Section 4).
- Creation of a split-transaction bus protocol through masking of remote requests.
- Separate request and reply meshes eliminate request-reply deadlocks. Dash does not guarantee deadlock-free operation, but it achieves deadlock avoidance through remote-access-cache buffering and NACKs.
- The shared address space allows users to assume a uniform memory model.
- Supports multiple overlapped write invalidations, thanks to non-blocking writes.

4 How is the idea evaluated?

- The Dash system was both simulated and built.
- Three applications were used to measure performance.
- Performance monitoring modules were used.
- To justify using less memory for the directory, invalidations per shared write are analyzed, though in just two applications. The results show that bit-vector tracking can be replaced with a few pointers plus limited broadcasts to keep track of sharing nodes and their accesses.
- Unloaded read-miss latency (no contention) is shown. What about the number of misses at each level of the hierarchy in the applications?
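To make the directory mechanism above concrete, here is a minimal sketch of a directory entry and its invalidation path. The entry layout, field names, and cluster count are illustrative assumptions, not Dash's actual hardware format; the point is simply that a write to a shared block generates point-to-point invalidations only for the clusters whose presence bit is set, with no broadcast.

```c
/* Sketch of a directory entry: one per memory block, tracking which
 * clusters hold a cached copy. Illustrative, not Dash's real layout. */
#include <stdint.h>
#include <stdio.h>

#define NUM_CLUSTERS 16 /* the 64-processor prototype: 16 clusters of 4 CPUs */

enum dir_state { UNCACHED, SHARED, DIRTY };

struct dir_entry {
    enum dir_state state;
    uint16_t sharers; /* bit i set => cluster i has a cached copy */
};

/* On a write to a SHARED block, invalidations go only to clusters whose
 * presence bit is set -- point-to-point messages, no broadcast needed. */
static void send_invalidations(struct dir_entry *e, int writer_cluster)
{
    for (int c = 0; c < NUM_CLUSTERS; c++) {
        if ((e->sharers & (1u << c)) && c != writer_cluster)
            printf("invalidate -> cluster %d\n", c); /* stand-in for a mesh message */
    }
    e->sharers = 1u << writer_cluster; /* writer becomes the sole owner */
    e->state = DIRTY;
}

int main(void)
{
    struct dir_entry e = { SHARED, 0x0035 }; /* clusters 0, 2, 4, 5 share the block */
    send_invalidations(&e, 0);               /* only clusters 2, 4, 5 get invalidated */
    return 0;
}
```

Section 4's observation that shared writes usually invalidate few caches is what motivates shrinking the `sharers` bit vector to a handful of explicit cluster pointers plus a broadcast-on-overflow flag, trading occasional extra invalidations for much less directory storage.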
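Likewise, a minimal sketch of the fetch-and-increment use case the notes mention: a sense-reversing barrier in which each arrival costs a single atomic increment. Ordinary C11 atomics stand in for Dash's memory-side fetch-and-increment, and the low serialization comes from the increment completing at the memory/directory rather than by bouncing a cache line between processors; this is an illustration under those assumptions, not Dash code.

```c
/* Sense-reversing barrier built on fetch-and-increment.
 * Each of NPROC threads calls barrier() with its own local_sense,
 * initialized to false. */
#include <stdatomic.h>
#include <stdbool.h>

#define NPROC 4

static atomic_int  count = 0;     /* arrivals in the current episode */
static atomic_bool sense = false; /* global sense, flipped each episode */

void barrier(bool *local_sense)
{
    *local_sense = !*local_sense; /* flip this thread's private sense */

    /* One fetch-and-increment per arrival; returns the previous count. */
    if (atomic_fetch_add(&count, 1) == NPROC - 1) {
        atomic_store(&count, 0);            /* last arriver resets... */
        atomic_store(&sense, *local_sense); /* ...then releases everyone */
    } else {
        while (atomic_load(&sense) != *local_sense)
            ; /* spin until the last arriver flips the global sense */
    }
}
```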
5 Was the evaluation in line with the stated user requirements?

- Not a very good job of evaluation considering the claims: too many claims with limited results (at least the ones shown).
- Would like to see the number of local misses and remote misses in the applications, not just the duration of one unloaded read miss.
- Would like to see how Dash scales with different interconnection networks, because the claim is that Dash is independent of the interconnect.
- Would like to see how the Dash OS performed on the Dash hardware.
- Does not really show contention in the applications.
- Some general-purpose applications, like MP3D, do not scale well with more processors.
- No evaluation of programming difficulty.
- A comparison against a message-passing machine could have been done to see how it scales relative to Dash.
- No evaluation of the prefetch mechanism to measure its effectiveness and the corresponding software-level speculation overheads.
- Only three applications, of which MP3D does not scale very linearly (interprocessor communication, remote miss service, queueing delays). Mincut falls off after 64 nodes. The claim is that the architecture is designed for scalability, but the conclusion is that it works well for 64 nodes.
- Could have given the memory usage of the applications: the size of the memory footprint.
- Dash was not compared to other machines; the speedups have no baseline comparison.
- Could have given the storage overhead of the whole directory (all distributed directories combined).
- The evaluation uses the harmonic mean. Is this okay, or is a weighted harmonic mean needed? Note: leaving MP3D out raises the harmonic mean to 12.8, versus 10.5 with MP3D (see the formulas below).
- Would like to see how Dash scales with a few thousand processors (perhaps with a matrix benchmark).
- Because a real Dash system was built, it is feasible, but this does not prove that it will really scale to thousands of processors.
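For reference, the two means at issue. The plain harmonic mean H of per-application speedups s_i weights every application equally; a weighted harmonic mean H_w, with each application's sequential running time as its weight w_i, would instead give the aggregate speedup of the combined workload. (The 12.8 vs. 10.5 figures above come from the class discussion, not from recomputing the paper's data.)

```latex
% Plain vs. weighted harmonic mean of n per-application speedups s_i
\[
  H \;=\; \frac{n}{\sum_{i=1}^{n} 1/s_i},
  \qquad
  H_w \;=\; \frac{\sum_{i=1}^{n} w_i}{\sum_{i=1}^{n} w_i / s_i}.
\]
% With w_i = application i's sequential running time, H_w equals total
% sequential time over total parallel time, i.e. the overall speedup.
```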
6 Was technology a factor in the problem or solution?

- The claim is that the solution is independent of the interconnection network.
- Though not directly stated, better technology means lower network delays.
- More processors fit with better technology, and more processors can do more work.

7 Were new tools or software techniques introduced?

- Work started on Jade, a new parallel language that lets programmers easily express parallelism.
- Dash OS: a modified version of the Irix kernel with support for prefetch, update-write, and queue-based locks, as opposed to a primitive kernel.
- A parallel macro library for structured access to the underlying hardware and OS.
- A suite of high-level and low-level performance monitoring and analysis tools. These tools can identify the portions of code where concurrency is smallest or where the most execution time is spent.
- A parallelizing compiler that does static data partitioning to improve locality and that exploits prefetching.
- Fetch-and-increment, update-write, and deliver primitives.
- Note: software was not the focus of the paper.

8 How may users with other requirements be affected?

- The relaxed consistency model is difficult to program: users are given the responsibility of ensuring the correctness of their synchronization (a sketch follows).
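A minimal sketch of that burden. Under a relaxed model, ordinary loads and stores may be reordered, so the programmer must insert explicit synchronization at the right points; here C11 release/acquire atomics stand in for Dash's fence operations (this illustrates the programming obligation, not Dash's actual primitives). Without the release/acquire pair, the consumer could observe `ready == 1` while `data` is still stale.

```c
/* producer() and consumer() run on two different threads. */
#include <stdatomic.h>

int data;                 /* ordinary, non-atomic shared variable */
atomic_int ready = 0;     /* synchronization flag */

void producer(void)
{
    data = 42; /* plain write; may be reordered without a fence */
    /* Release: all prior writes become visible before the flag does. */
    atomic_store_explicit(&ready, 1, memory_order_release);
}

int consumer(void)
{
    /* Acquire: no later read may be satisfied before the flag is seen. */
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ; /* spin until the producer signals */
    return data; /* guaranteed to be 42 only because of release/acquire */
}
```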
- The queue-based lock grants the lock to a waiting requester chosen at random. This provides fairness, but no priority can be enforced.
- Low-contention applications may suffer software overheads from queueing and prefetching, hurting performance.
- Directory memory does not scale very well due to bit-vector-based management. This restricts machines with a very large number of processors because of the cost and area of the directories.
- Since directory and interconnect costs are amortized over a larger number of processors, programs with limited parallelism still have to pay these costs.
- Highly contended synchronization objects and heavily shared data objects create a hotspot problem that reduces network and memory throughput.
- Prefetch operations are not very intuitive for users to insert in all applications, reducing programmability.
- Modifying existing code to include primitives that exploit the hardware limits backward compatibility.
- Applications exhibiting poor cache behavior or extensive read/write sharing could see significant delays on remote cache misses.
- Nonuniform memory access times affect programming. With shared memory, programmers do not need to keep track of where data lives; in message passing, programmers have to track data through their programs.

9 How can the discussion of this paper be generalized in the context of the class?

- The Dash parallel architecture exhibits scalable cache coherence. This is another example of supercomputing.
- The paper addresses a key aspect: the scalability of parallel systems. Earlier papers were focused on network delays and communication.
- Dash took something not usually scalable and made it scalable.
- A paper that identifies problems in its own architecture and presents mechanisms to handle all of them.
- The paper claims to be enormously scalable, but the evaluation shows only that it is good for 64 nodes.
- A paper that explains the problems in designing for scalability, network scalability, and the software and hardware techniques to overcome those problems.
10 Summary and Additional Notes

The Dash multiprocessor demonstrated a method for scalable cache coherence through the use of distributed directories. It also maintained a single shared address space to make the system easier to program, and it introduced several techniques for synchronization and bus architecture. At the time the paper was written, 64 processors was a very large system. However, the evaluation in this paper provided no evidence that Dash could scale to thousands of processors, as it claimed. The evaluation also failed to compare Dash to other systems, despite the performance measurements done on Dash.

11 Further Discussion

Intel and AMD multiprocessors are single-address-space and cache coherent, but they only scale to 16 or maybe 32 processor cores. The Dash paper did not evaluate a case with a thousand or more processors, but do we even need such manycore systems? Buses do not scale to a thousand processors. Directories also get so large that we cannot keep them in main memory, since many processors would contend for them; separate structures for the directories would be needed (a back-of-the-envelope estimate follows the section).

A hierarchy of networks is a good idea that is feasible with packaging technology. Packaging technology influences architecture because, generally, the closer things are together, the faster they are, so the way things are packaged affects the network.

Large databases favor machines with shared memory because database applications do not know in advance where the data elements they are working on reside.
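A rough estimate of why full bit-vector directories become the problem just described (illustrative numbers, not taken from the paper): with one presence bit per processor for every B-byte memory block, the directory storage relative to memory is

```latex
\[
  \text{overhead} \;=\; \frac{P \text{ bits per block}}{8B \text{ bits per block}}
  \;=\; \frac{P}{8B}.
\]
% Example: P = 1024 processors and B = 64-byte blocks give
% 1024 / 512 = 200% overhead -- the directory would be twice the size of
% memory itself, which is why limited-pointer schemes and separate
% directory structures become necessary at that scale.
```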