Introduction to High-Performance Computing and the Supercomputing Institute


1 Introduction to High-Performance Computing and the Supercomputing Institute
Carlos P. Sosa
Cray, Inc. and Biomedical Informatics and Computational Biology, University of Minnesota Rochester

2 Agenda
What is High-Performance Computing?
Floating-Point Operations
Cost per Gflops
High-Performance Computing: Historical Perspective
Introduction to MSI facilities
Show you how to access our systems
Point you to where to go for help
Brief introduction to Linux/UNIX and some useful commands
Ensure that you are not overwhelmed
Encourage you to ask questions of MSI staff to get what you need

3 What is High-Performance Computing?
High-performance computing (HPC) uses supercomputers to solve advanced computational problems.
Today, computer systems approaching the teraflops region are counted as HPC computers.
A teraflop is a measure of a computer's processor speed: one trillion (10^12) floating-point operations per second.

4 International System of Units (figure: table of SI prefixes)

5 Floating-Point Operations
FLOPS (or flops or flop/s) stands for floating-point operations per second, a measure of a computer's performance.
Floating point is a method of representing real numbers.
The S stands for "second"; conservative speakers consider "FLOPS" both the singular and the plural of the term.
FLOP (or flop) is used as an abbreviation for "FLoating-point OPeration".
A flop count is a count of these operations in a given section of a computer program.
FLOPS is not an SI unit.

6 Why Floating Point?
significant digits × base^exponent
The radix point (decimal point, or, more commonly in computers, binary point) can "float", i.e., be shifted left or right.

7 How to Calculate Mflops?
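The slide's formula did not survive transcription, but the standard definition is straightforward; here is a reconstruction with a worked example (the dot-product flop count below is an illustrative assumption, not from the slide):

\[
\text{MFLOPS} = \frac{\text{floating-point operation count}}{\text{execution time (s)} \times 10^{6}}
\]

For example, a dot product of two vectors of length N performs N multiplications and N additions, i.e. 2N flops. With N = 10^8 and a run time of 0.5 s:

\[
\text{MFLOPS} = \frac{2 \times 10^{8}}{0.5 \times 10^{6}} = 400
\]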

8 What's a petaflop?
One quadrillion calculations per second!
If you multiplied two 14-digit numbers together once per second, it would take you:
32 years to complete 1 billion calculations
32 thousand years to complete 1 trillion calculations
32 million years to complete 1 quadrillion calculations
32 years ago, Star Wars was released
32 thousand years ago, early cave paintings were completed
32 million years ago, the Alps were rising in Europe
Courtesy of NCSA: Blue Waters Project
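A quick sanity check of the arithmetic (my own, not from the slide): a year is about 3.15 × 10^7 seconds, so

\[
\frac{10^{9}\ \text{s}}{3.15 \times 10^{7}\ \text{s/yr}} \approx 31.7\ \text{yr},
\qquad
\frac{10^{15}\ \text{s}}{3.15 \times 10^{7}\ \text{s/yr}} \approx 3.2 \times 10^{7}\ \text{yr}.
\]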

9 Binary versus Decimal
2^10 is very nearly equal to 1000, so computer professionals started using the SI prefix "kilo" to mean 1024.
As a result, not everyone agrees on what a megabyte is:
For computer memory, most manufacturers use megabyte to mean 2^20 = 1,048,576 bytes.
Manufacturers of computer storage devices usually use the term to mean 1,000,000 bytes.
Local area networks have used megabit per second to mean 1,048,576 bit/s, while telecommunications engineers use it to mean 10^6 bit/s.
As if two definitions of the megabyte were not enough, a third megabyte of 1,024,000 bytes is the megabyte used to format the familiar 90 mm (3 1/2 inch) "1.44 MB" diskette.
The confusion is real, as is the potential for incompatibility in standards and in implemented systems.
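A quick way to see the three competing megabytes side by side (a minimal shell sketch, not from the slides):

# print the binary, decimal, and diskette megabyte in bytes
$ echo $((2**20)) $((10**6)) $((1024*1000))
1048576 1000000 1024000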

10 Binary Notation (figure)

11 Historical Change in Cost per GFlops (figure)

12 HPC Historical Perspective
High-performance computers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC), which led the market into the 1970s, when Cray founded Cray Research.
"Big irons" dominated the market ( ).
The 1980s were the decade of the minicomputer.
Mid-1990s: the "supercomputer market crash".

13 Big Irons (photo: Cray-2)

14 Supercomputers Peak Performance (chart: "From Kilobytes to Petabytes in 50 Years")

15 Why Should I Care About HPC?
"The Looming Petascale" (April 13, 2009): chemists gear up for a new generation of supercomputers.
"The new petascale computers will be 1,000 times faster than the terascale supercomputers of today, performing more than 1,000 trillion operations per second. And instead of machines with thousands of processors, petascale machines will have many hundreds of thousands that simultaneously process streams of information. This technological sprint could be a great boon for chemists, allowing them to computationally explore the structure and behavior of bigger and more complex molecules."

16 Towards More Realistic Systems
"Enabling and Scaling Biomolecular Simulations of 100 Million Atoms on Petascale Machines with a Multicore-optimized Message-driven Runtime", Chao Mei, Yanhua Sun, Gengbin Zheng, Eric J. Bohm, Laxmikant V. Kale, James C. Phillips, Chris Harrison, SC'11, November 12-18, 2011, Seattle, Washington

17 Introduction to the Minnesota Supercomputing Institute (MSI)

18 MSI At a Glance
Manages approximately 400 software packages and 50 research databases.
Main data center in Walter Library.
MSI operates five laboratories on campus, mostly serving the life sciences.
Serves more than 500 PI groups and their users.
MSI is an academic unit of the University of Minnesota (under the OVPR) with about 45 full-time employees.
Dedicated resources for serial workflows, database management, and cloud access.

19 (campus map: BSCL, MSI offices, HPC resources, labs)

20 Eligibility
Faculty members at the University of Minnesota
University of Minnesota academic professionals
Faculty researchers at other accredited institutions of post-secondary education in the state of Minnesota
Some software may not be available to users not affiliated with UMN

21 MSI Resources
HPC Resources: Koronis, Itasca, Calhoun, Cascade, GPUT
Laboratories: BMSDL, BSCL, CGL, SDVL, LMVL
Software: Chemical and Physical Sciences, Engineering, Graphics and Visualization, Life Sciences, Development Tools
User Services: Consulting, Tutorials, Code Porting, Parallelization, Visualization

22 Access
System access information is available at mymsi; see it for additional resources.
For building access, contact x
Information can be found on the General Information page for each lab ( )
For help, contact [email protected], x6-0802

23 Remote Access to MSI Lab Linux Systems
You cannot remotely log in directly to most machines at MSI.
Two options to connect to MSI lab systems:
ssh login.msi.umn.edu
NX (requires the user to download and install a client)
Software cannot be run directly on the login or NX nodes; use isub to start an interactive session on a compute node.
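A minimal connection sketch (the account name is a placeholder, and isub's resource options are omitted because the exact flags are MSI-specific):

# log in to the MSI gateway node
ssh username@login.msi.umn.edu
# from the login node, request an interactive session on a lab compute node
isub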

24 HPC Resources
Koronis: SGI Altix, 1,140 Intel Nehalem cores, 2.96 TB of memory
Itasca: Hewlett-Packard 3000BL, 8,728 Intel Nehalem cores, 26 TB of memory
Calhoun: SGI Altix XE, Intel Xeon Clovertown cores, 2.8 TB of memory
Cascade: 8 Dell compute nodes, 32 Nvidia M2070 GPGPUs
GPUT: 16 Nvidia GeForce GTX 480 GPUs

25 Laboratories
Scientific Development and Visualization (SDVL): application development, computational physics, workshops
Basic Sciences Computing (BSCL): structural biology, batch processing, stereo projection system
Computational Genetics (CGL): bioinformatics, genomics, proteomics
Biomedical Modeling, Simulation, and Design (BMSDL): drug design, molecular modeling, computational chemistry
LCSE-MSI Visualization (LMVL): virtual reality, large screen, remote visualization

26 (lab photos: SDVL, LMVL, BSCL)

27 BMSDL

28 St. Paul Campus CGL

29 Software
Approximately 400 software applications
Classified into three service levels: Primary, Ancillary, Minimal
Purchase justification required

30 Disk Resources
Labs and HPC systems (other than Koronis) have shared home directories.
Disk space is allocated to a group.
50 GB initial quota on Koronis.
Every lab and HPC machine has access to at least 1 TB of scratch space.
On HPC systems your scratch quota is 50 GB.
You may request a quota increase if necessary.
PIs may request project space for large data and group collaboration.
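A quick way to see how much of a quota your directories are consuming (a generic shell sketch; the /scratch/username path mirrors the layout used in the lab exercise later, and MSI may provide its own quota-reporting tool):

# summarize disk usage of the home directory and a scratch directory
du -sh ~ /scratch/username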

31 Tutorials/Workshops
Introductory: Unix, Linux, remote computing, job submission, queue policy
Programming & Scientific Computation: code parallelization, programming languages, math libraries
Computational Physics: fluid dynamics, space physics, structural mechanics, material science
Computational Chemistry: quantum chemistry, classical molecular modeling, drug design, cheminformatics
Computational Biology: structural biology, computational genomics, proteomics, bioinformatics

32 Support
MSI is a resource dedicated to enabling and improving the quality of computational research at the University of Minnesota.
Basic research support is made available to researchers at minimal or no cost.
MSI participates with researchers who are seeking funding for projects that require MSI consultants or developers.

33 Projects
Internal Services: assist with long-term development projects; wide range of project activities; tailored to specific research initiatives or development programs; at cost.
External Services: acknowledgement of MSI in publications / grant proposals is requested.

34 Integrate MSI Resources into Course Work
Class accounts: access to MSI on a semester basis for all students in a course
Customized workshops and guest lectures: when our expertise overlaps with your interests, e.g., parallel programming, integration of computational assignments in a field
Reserve the LMVL: use the stereoscopic capabilities of the visualization lab for high-impact structure and model visualization lectures

35 MSI Web Pages
MSI home page
Software
Password reset
Tutorials
FAQ

36 Why Modules?
The Modules environment management package provides support for dynamic modification of the user environment via modulefiles.
Each modulefile contains all the information needed to configure the shell for a particular application.
Advantage: you are not required to specify explicit paths for different executable versions or to set $MANPATH and other environment variables manually; all the information is embedded in the modulefile.
Modules can be loaded and unloaded dynamically and atomically, in a clean fashion.
All popular shells are supported, including bash, ksh, zsh, sh, csh, and tcsh, as well as some scripting languages such as Perl.
As a user, you can add and remove modulefiles from your current shell environment.
The environment changes performed by a modulefile can be viewed by using the module command as well.

37 Module Commands
Command            Description
module list        Lists modules currently loaded in a user's environment
module avail       Lists all available modules on a system in condensed format
module avail -l    Lists all available modules on a system in long format
module display     Shows environment changes that will be made by loading a given module
module load        Loads a module
module unload      Unloads a module
module help        Shows help for a module
module swap        Swaps a currently loaded module for an unloaded module

38 What is Loaded Now?
$ module list
Currently Loaded Modulefiles:
  1) local   3) user   5) torque   7) base
  2) vars    4) moab   6) suacct

39 Re-Initializing the Module Command
Modules functionality is highly dependent upon the shell environment being used.
Sometimes when switching between shells, modules must be reinitialized.
For example, you might see an error such as the following:
$ module list
-bash: module: command not found
To fix this, just re-initialize your modules environment:
$ source $MODULESHOME/init/myshell
where myshell is the name of the shell you are using and need to re-initialize.

40 What is Available?
To see which modulefiles are available on your system, enter this command:
% module avail [string]
The module avail command produces an alphabetical listing of every modulefile in your module use path and has no option for "grepping". Therefore, it is usually more useful to use the command with a string argument:
cpsosa@silver:~> module avail gcc
-------- /soft/modules/modulefiles --------
gcc/4.3.3  gcc/4.4.0  gcc/4.4.3(default)

41 Loading and Unloading Modules
If a modulefile is not already loaded, use the module load command to load it:
% module load modulefile
This command loads the currently defined default version of the module, unless you specify otherwise.
cpsosa@silver:~> module list
Currently Loaded Modulefiles:
  1) local   3) user   5) torque   7) base
  2) vars    4) moab   6) suacct
cpsosa@silver:~> module load dock/6.3
cpsosa@silver:~> module list
Currently Loaded Modulefiles:
  1) local   4) moab     7) base      10) ompi/1.2.9/xl
  2) vars    5) torque   8) xlc/      11) dock/6.3
  3) user    6) suacct   9) xlf/12.1
cpsosa@silver:~> module unload dock/6.3

42 Module Swapping
Alternatively, you can use the module swap or module switch command to unload one module and load a comparable one:
$ module load gcc/4.3.3
$ module list
Currently Loaded Modulefiles:
  1) local   3) user   5) torque   7) base
  2) vars    4) moab   6) suacct   8) gcc/4.3.3
$ module swap gcc/4.4.0
$ module list
Currently Loaded Modulefiles:
  1) local   3) user   5) torque   7) base
  2) vars    4) moab   6) suacct   8) gcc/4.4.0

43 Lab 1: Test Modules
1. $ ssh itasca
2. $ module list
3. Type: $ g09
4. Type: $ module load g09
5. Type: $ g09

44 Portable Batch System (PBS)
The Portable Batch System (PBS) is a queuing system installed for batch job processing.
It matches job requirements with available resources.
It ensures that machines are fully used and that resources are distributed among all users.
In contrast to the HPC system queues, the lab system queues do not require Service Units (SUs) in order for jobs to run.

45 Lab Compute Nodes
Nodes                Model                        CPU cores                Memory
labh01               PowerEdge R900               Intel Xeon 2.67 GHz      128 GB
labh02               SunFire X4440                16 AMD Opteron           128 GB
labh03 - labh08      SunFire X4600                32 AMD Opteron           128 GB
mirror1 - mirror16   Altix XE                     Intel Xeon 2.66 GHz      16 GB
lab001 - lab064      Altix XE                     Intel Xeon 2.66 GHz      16 GB
laboc01 - laboc04    LiquidCool Liquid Submerged  12 Intel Xeon 4.1 GHz    24 GB

46 How to Create a Job Submission Script
1. Log in to lab.msi.umn.edu
2. Write a PBS batch script
3. Example: a 1-hour job to run on a single processor of a single node, with 1 GB of memory; load any preferred modules, then run any software that can operate in batch mode:

#!/bin/bash -l
#PBS -l nodes=1:ppn=1,mem=1gb,walltime=01:00:00
#PBS -m abe
cd /home/msi/username/testpbs
module load intel
./test < input.dat > output.dat

47 How to Submit a Job
Use the command qsub to submit a job to the queuing system.
qsub takes a job submission script that contains special commands telling PBS what resources are needed; the script also contains the commands necessary to run the submitted job.
Example:
$ qsub script.pbs
To submit to a different queue:
$ qsub -q lab-long script.pbs
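qsub prints the identifier of the newly queued job, which the other standard PBS commands accept; a short follow-on sketch (generic PBS usage, not specific to these slides):

# submit the script; qsub prints the job ID, e.g. 123456.server
$ qsub script.pbs
# check the job's state in the queue (Q = queued, R = running)
$ qstat 123456
# remove the job from the queue if it is no longer needed
$ qdel 123456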

48 How to Check Job Status
You can check job status using the showq command:
$ showq -u username

49 Lab 1: Submit a Very Simple Script
Go to the scratch file system: $ cd /scratch
Create a directory: $ mkdir myaccount
Move to that directory: $ cd myaccount
Create a directory: $ mkdir test
Move to that directory: $ cd test
Copy files: $ cp ~cpsosa/gaussian/test/* .
Submit your first script: $ qsub test.pbs

50 Example: Electronic Structure Calculation

51 Schrödinger Equation
\[
\hat{H}\,\Psi(\mathbf{r}_1,\dots,\mathbf{r}_N) = E\,\Psi(\mathbf{r}_1,\dots,\mathbf{r}_N)
\]
where \Psi is the N-body wavefunction, E is the energy of either the ground or an excited state of the system, and the Hamiltonian \hat{H} depends on the charges of the individual particles and their spatial positions \mathbf{r}_i.
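For reference (a standard form, not shown on the slide): in atomic units, the non-relativistic Hamiltonian for N charged particles with charges q_i and masses m_i is

\[
\hat{H} = -\sum_{i=1}^{N} \frac{1}{2m_i}\nabla_i^{2} + \sum_{i<j} \frac{q_i q_j}{|\mathbf{r}_i - \mathbf{r}_j|},
\]

which makes explicit how the charges and spatial positions mentioned above enter the equation.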

52 Gaussian 09: an Electronic Structure Suite of Programs
Gaussian 09 is an electronic structure package capable of predicting many properties of atoms, molecules, and reactions, e.g.:
molecular energies and structures
vibrational frequencies
molecular orbitals
and much more, utilizing ab initio, density functional theory, semiempirical, molecular mechanics, and hybrid methods.

53 Gaussian History
Gaussian 70, Gaussian 76, Gaussian 77, Gaussian 78, Gaussian 80, Gaussian 82, Gaussian 83, Gaussian 85, Gaussian 86, Gaussian 88, Gaussian 90, Gaussian 92, Gaussian 93, Gaussian 94, Gaussian 95, Gaussian 96, Gaussian 98, Gaussian 03, Gaussian 09

John Pople
Born: 31 October 1925, Burnham-on-Sea, Somerset, England
Died: March 15, 2004 (aged 78), Chicago, Illinois, United States
Nationality: English
Fields: Theoretical chemistry, quantum chemistry, computational chemistry
Alma mater: Cambridge University
Doctoral advisor: John Lennard-Jones
Notable awards: Nobel Prize in Chemistry (1998)

54 Input Format
%  Resource management
#  Route card
   blank line
Title section
   blank line
Molecular coordinates
   blank line
Geometric variables
   blank line
Other input options
   blank line

55 Input File
%mem=32mb                        system resources
#p hf/6-31g opt                  computational model / type of calculation

hf/6-31g optimization of water   title

0 1                              charge & multiplicity
o                                structure definition (z-matrix)
h 1 oh
h 1 oh 2 aoh

oh=0.9                           variable values
aoh=104.0

56 Route Card
Description:
specifies keywords and options
always begins with a # character
keywords can be specified in any order
options are grouped in parentheses ()
keywords should not be repeated
the route section can be up to 5 lines long
ended with a blank line
Syntax:
#[p] method/basis [keyword=(opt1,opt2)] [keyword2=(opt1[,opt2])]
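A concrete route line that follows this syntax (an illustrative example, not taken from the slides); it requests a Hartree-Fock geometry optimization followed by a frequency calculation with the 6-31G(d) basis set:

#p hf/6-31g(d) opt freq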

57 Type of Calculation
single-point energy and properties
geometry optimization
reaction path following/searching
frequency

58 Computational Method
Level of theory:
molecular mechanics: amber, dreiding, uff
semi-empirical: am1, pm3, mndo, ...
density functional theory: b3lyp, mpwpw91, custom
ab initio: hf, mp2, ccsd, qcisd, ...
hybrid: g2, g3, oniom, ...

59 Lab 3: Run Gaussian 09
1. Copy files: $ cp ~cpsosa/gaussian/g09/* .
2. Build your h2o.com input file
3. Build your h2o.pbs (a sketch follows below)
4. Submit your h2o.pbs to the queue
5. Visualize the results
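A minimal sketch of what h2o.pbs might look like, modeled on the earlier job-script example (the resource limits, log file name, and g09 invocation here are assumptions; adapt them to the copied files):

#!/bin/bash -l
#PBS -l nodes=1:ppn=1,mem=1gb,walltime=00:30:00
# move to the directory the job was submitted from
cd $PBS_O_WORKDIR
# load the Gaussian module (name taken from the next slide's "module load gaussian")
module load gaussian
# run Gaussian 09 on the water input, writing the log file
g09 < h2o.com > h2o.log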

60 Lab 4: Visualize Results
1. Load the gaussian module if you have not already loaded it: $ module load gaussian
2. Run GaussView by typing: $ gview
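GaussView opens a graphical window, so the remote connection must forward X11; a minimal sketch (standard ssh behavior, not an MSI-specific procedure):

# connect with X11 forwarding so gview can draw on your local display
ssh -X username@login.msi.umn.edu
module load gaussian
gview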

61 Questions?
The MSI help desk is staffed Monday through Friday from 8:30 AM to 7:00 PM.
Walk-in help is available in room 569 Walter Library; phone support is also available (see the Access slide for contact details).

62 Thank You!
Many thanks to:
Nancy Rowe for providing the MSI slides
MSI Support
