Week 1 out-of-class notes, discussions and sample problems


Although we will primarily concentrate on RISC processors as found in some desktop/laptop computers, here we take a look at the varying types of processors.

Handheld/mobile devices: we need powerful processors that are energy efficient (due to battery restrictions) and produce little heat (due to the lack of a fan), yet offer real-time performance and graphics processing. The ARM family of processors is the most common, based on the Acorn RISC Machine, first introduced in the early 80s. In the late 80s, Apple worked with Acorn to begin releasing new ARM processor cores, and it is these that most current ARM processors are based on. The ARM family is generally denoted by the following features:
- Load/store instruction set
- 32-bit registers, some of which are only used by the OS
- Fixed-length 32-bit instructions
- Single clock-cycle execution for most instructions
- Conditional execution rather than branch prediction
- Condition codes used only if specified
- Indexed addressing modes
The early ARM processors used a 3-stage pipeline, later expanded to as many as 13 stages. Branch prediction was added to later versions to improve over conditional execution (we will talk about conditional execution later in the semester). Later versions also implemented the Thumb instruction set, which consists of 16-bit instructions; this allows two instructions to be fetched at once and possibly executed together. Thumb instructions can be placed inside of ordinary code. The idea behind ARM is to have a scaled-back ISA so that the processors can squeeze a good deal of parallelism out of code. Since most handheld devices are running only one or a few apps at a time, there is less need for large memories, the fastest clock speeds or the power found in larger computers. This keeps power consumption and heat production down, and keeps the cost down.

Desktop/laptop computers: for these devices, we need to manage the tradeoff between price and performance. Obviously users want better performance but are only willing to spend between $300 and $2500 on a desktop/laptop unit. The largest requirements are to support modest multitasking (e.g., up to 10 processes at a time), graphics and other forms of multimedia, Internet communication and common forms of productivity software, as well as the luxury of running a complex operating system that handles user duties with little interaction. Memory requirements are somewhat lofty because users will multitask and because the Windows and Mac operating systems are large. This not only requires 4 GB-8 GB of RAM but also as many as 3 levels of cache, organized in such a way that cache performance does not negatively impact the processor. Additionally, modern processors give off a good amount of heat, so cooling fans must be available. The most common PC processors today are the latest generations of Intel Pentium, Xeon, Celeron and now Core processors. AMD is currently one of the few competitors in the PC market, offering the FX, Phenom II and Athlon II processors.

Servers: introduced in the 1980s as file servers, servers are now more generically titled and range in usage from simple file servers (often found in LANs, or used as web or database servers on the Internet) to servicing distributed processing for ATM machines, airline reservations and on-line services (e.g., the Amazon web site, the Google search engine). In the latter case, the authors refer to this as a cluster or warehouse-scale computer; cloud computing also fits in this category. Higher-end servers reach supercomputer status. Costs for a server range from $5K through $10M, and up to $200M for a cluster. The most important aspect of this class of computer is throughput, the number of services handled per unit of time. Throughput is impacted as much by memory capacity and telecommunications as it is by processor capability. Scalability is another important feature, primarily impacted by how easy it is to add memory and hard disk space to the computer(s). The server/cluster end has largely replaced the mainframe computers of old. There is a wide range of processors used by servers, but the more significant performance increases come from multiprocessing rather than from improvements made to a single processor as we see in the PC market.

Embedded computers: at the other extreme from clusters is the embedded computer, a processor embedded in another device (e.g., a microwave oven, a car engine). These devices are often 8-bit or 16-bit processors with minimal storage and modest power requirements. They often cost less than $5 and seldom cost more than $100.

What we want to cover in this class is the common set of processor improvements that are standard in most processors, no matter which platform they are intended for. The primary tool for processor improvement is parallelism. There are many forms of parallelism, which the authors divide between data-level and task-level. We implement these using instruction-level parallelism through pipelining and speculative execution, vector-level parallelism using an SIMD-style architecture, thread-level parallelism and request-level parallelism (not covered in this course). The two main efforts to achieve parallelism are through the processor and through the cache. In the processor, we use pipelining and multiple functional units so that one or more instructions can be issued each clock cycle. Due to the complexity of modern processors, instructions may finish execution out of order, and therefore we need additional hardware to re-order the instructions upon completion. In the cache, we want to ensure as few cache misses as possible so that neither the instruction issue stage of the pipeline nor an instruction waiting on memory stalls, so the principle of locality of reference is applied. We will visit many other cache improvements later in the semester.

Above all, we focus on the common case. As we saw in class, Amdahl's Law shows us that no matter what level of speedup we might achieve through some improvement, it is the common case that will win out. Consider, for instance, an improvement that can be used 80% of the time and increases performance by 50%, versus an improvement that can be used 25% of the time and increases performance by a factor of 10 (1000%).
Improvement 1: 1 / (.20 + .80 / 1.5) = 1.36 (a 36% speedup)
Improvement 2: 1 / (.75 + .25 / 10) = 1.29 (a 29% speedup)

Later in the semester, we will look at the initial x86 pipeline and see how the CISC features of x86 complicated the pipeline to the point of poor performance. We cover the MIPS instruction set because it is a model instruction set to aim for; that is, it was designed specifically to promote an efficient pipeline. MIPS was originally developed in the early 80s. Because of this, it lacks some features that we now want present to help support further parallelism. For instance, there are no vector processing instructions in MIPS (we will briefly visit this later in the semester), nor are there graphics processing instructions (we will not examine these although they are in the textbook). As covered in class, the typical MIPS processor uses a 5-stage fetch-execute cycle. Next week, in the out-of-class notes, you will compare it to the MIPS R4000, an 8-stage fetch-execute cycle.

We wrap up the notes for the out-of-class portion by looking at several example problems. Also visit the discussion board.
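Before turning to the problems, here is a small Python sketch (my own addition, not part of the original notes) that evaluates Amdahl's Law, speedup = 1 / ((1 - f) + f / s), for the two improvements compared above:

def amdahl(f, s):
    # f = fraction of execution time the improvement applies to
    # s = speedup obtained on that fraction
    return 1.0 / ((1.0 - f) + f / s)

# Improvement 1: usable 80% of the time, makes that portion 1.5x faster
print(amdahl(0.80, 1.5))   # ~1.36
# Improvement 2: usable 25% of the time, makes that portion 10x faster
print(amdahl(0.25, 10))    # ~1.29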
1. It seems that a quad-core processor should speed up a computer by a factor of 4, but it doesn't. Use Amdahl's Law to compute the percentage of program execution that would have to be distributed across the four cores to achieve an overall speedup of 3, of 2, of 1.5 and of 1.25.
Answer: We want to solve for x in y = 1 / ((1 - x) + x / 4), where y is 3, 2, 1.5 and 1.25. This involves a little algebra, but we wind up with x = (4 / 3) * (1 - 1 / y). For y = 3, x = .889. For y = 2, x = .667. For y = 1.5, x = .444. For y = 1.25, x = .267. So to achieve a speedup of 1.25, all four cores must be in use about 26.7% of the time, but to achieve a speedup of 3, all four cores must be in use 88.9% of the time.
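These values can be checked with a short sketch (again mine, not from the notes) that inverts Amdahl's Law for the parallel fraction x:

def parallel_fraction(target_speedup, cores=4):
    # Solve y = 1 / ((1 - x) + x / cores) for x
    return (cores / (cores - 1)) * (1 - 1 / target_speedup)

for y in (3, 2, 1.5, 1.25):
    print(y, round(parallel_fraction(y), 3))   # 0.889, 0.667, 0.444, 0.267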

2. Let's compare a CISC machine versus a RISC machine on a benchmark. Assume the following characteristics of the two machines.
CISC: CPI of 4 for load/store, 3 for ALU/branch and 10 for call/return; CPU clock rate of 2.75 GHz.
RISC: CPI of 1.4 (the machine is pipelined, so the ideal CPI is 1.0, but overhead and stalls make it 1.4); CPU clock rate of 2 GHz.
Since the CISC machine has more complex instructions, the IC for the CISC machine is 40% smaller than the IC for the RISC machine. The benchmark has a breakdown of 38% loads, 10% stores, 35% ALU operations, 3% calls, 3% returns and 11% branches. Which machine will run the benchmark in less time, and by how much?
Answer: use CPU time = IC * CPI * Clock cycle time.
RISC: IC_RISC * CPI_RISC * Clock cycle time_RISC = IC_RISC * 1.4 * (1 / 2 GHz) = 0.7 ns * IC_RISC
CISC: IC_CISC * CPI_CISC * Clock cycle time_CISC = IC_RISC * 0.6 * (4 * .38 + 4 * .10 + 3 * .35 + 10 * .03 + 10 * .03 + 3 * .11) * (1 / 2.75 GHz) = IC_RISC * 0.6 * 3.9 / 2.75 GHz = 0.851 ns * IC_RISC
Since the CISC machine has the higher CPU time, the RISC machine is faster by 0.851 / 0.7 = 1.22, or about 22%.
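The weighted-CPI arithmetic can be verified with a small sketch (my own, not part of the notes); the times below are per RISC instruction, in nanoseconds:

# Benchmark mix as fractions of the RISC instruction count, and the CISC CPIs
mix = {'load': 0.38, 'store': 0.10, 'alu': 0.35, 'call': 0.03, 'ret': 0.03, 'branch': 0.11}
cisc_cpi = {'load': 4, 'store': 4, 'alu': 3, 'call': 10, 'ret': 10, 'branch': 3}

cpi_cisc = sum(mix[i] * cisc_cpi[i] for i in mix)   # weighted CPI = 3.9

risc_time = 1.4 * (1 / 2.0)              # 1.4 CPI at 2 GHz -> 0.70 ns per instruction
cisc_time = 0.6 * cpi_cisc * (1 / 2.75)  # 0.6 * IC, 3.9 CPI, 2.75 GHz -> ~0.85 ns
print(cpi_cisc, risc_time, cisc_time, cisc_time / risc_time)   # ratio ~1.22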

3. The MIPS instruction set passes parameters through memory, thus slowing down function calls. An alternate architecture, Berkeley RISC, uses register windows. Register windows place the local variables of a function into a set of registers; values being passed as parameters to another function are placed into a further set of registers which overlap the registers available to the called function. Thus, the window is a set of overlapping registers (see the figure below). Let's assume that using register windows causes those memory accesses to be replaced by register operations, so rather than accruing the CPI of a load or store for each parameter, each parameter accrues the CPI of an ALU operation. Assume we have the following CPI breakdown: loads/stores: 4, ALU and unconditional branches: 2, conditional branches: 3, procedure calls and returns: 15. Architects are trying to decide whether to use additional registers in a CPU for register windows or just for more registers in the register file. If we go with ordinary registers, the number of loads and stores is reduced by 40% and 30% respectively (because we can put more into registers). If we go with register windows, the procedure call/return CPI is reduced greatly; let's assume the CPI of a procedure call drops to 4.5 and that of a return drops to 3. Which should we use for a benchmark of 40% loads, 13% stores, 31% ALU, 8% conditional branches, 2% unconditional branches, 3% procedure calls and 3% returns?
Answer: CPU Time = IC * CPI * Clock Cycle Time. The last value will not change between the two approaches. If we use register windows, CPI is reduced; if we add more registers, IC is reduced because of the fewer loads and stores.
CPI_original = .40 * 4 + .13 * 4 + .31 * 2 + .08 * 3 + .02 * 2 + .03 * 15 + .03 * 15 = 3.92
CPI_regwindows = .40 * 4 + .13 * 4 + .31 * 2 + .08 * 3 + .02 * 2 + .03 * 4.5 + .03 * 3 = 3.245
We also have to figure out the new breakdown of instructions if we have fewer loads and stores:
.40 * .40 = .16, so the removed loads amount to 16% of the original IC
.13 * .30 = .039, so the removed stores amount to 3.9% of the original IC
There will be .16 + .039 = .199 fewer instructions, so we recompute the breakdown of instructions given an IC of 1 - .199 = .801:
Loads = (.40 - .16) / .801 = .300
Stores = (.13 - .039) / .801 = .114
ALU = .31 / .801 = .387
Conditional branches = .08 / .801 = .100
Unconditional branches = .02 / .801 = .025
Procedure calls = .03 / .801 = .037
Returns = .03 / .801 = .037
New CPI = .300 * 4 + .114 * 4 + .387 * 2 + .100 * 3 + .025 * 2 + .037 * 15 + .037 * 15 = 3.89
IC_registers = .801 * IC_original
CPU Time_register windows = IC_original * 3.245 * Clock cycle time = 3.245 * IC_original * Clock cycle time
CPU Time_new registers = IC_original * .801 * 3.89 * Clock cycle time = 3.116 * IC_original * Clock cycle time
The version using the additional registers as ordinary registers is faster, so the speedup of using them as ordinary registers instead of as register windows is 3.245 / 3.116 = 1.04, or a little over 4%.
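Here is a sketch of the same comparison (mine, not from the notes); it works with the raw instruction fractions rather than the re-normalized mix, so the second result comes out as about 3.12 rather than the 3.116 obtained above after rounding, and the conclusion is the same:

def weighted_cpi(mix, cpi):
    # Average CPI for an instruction mix given per-class CPIs
    return sum(mix[i] * cpi[i] for i in mix)

mix = {'load': .40, 'store': .13, 'alu': .31, 'cbr': .08, 'ubr': .02, 'call': .03, 'ret': .03}
cpi_base = {'load': 4, 'store': 4, 'alu': 2, 'cbr': 3, 'ubr': 2, 'call': 15, 'ret': 15}
cpi_win = dict(cpi_base, call=4.5, ret=3)      # register windows cheapen calls/returns

# Option 1: register windows -- same IC, lower CPI (units: IC_original * clock cycles)
time_windows = weighted_cpi(mix, cpi_win)                   # 3.245

# Option 2: more ordinary registers -- 40% fewer loads, 30% fewer stores, same CPIs
counts = dict(mix, load=.40 * .60, store=.13 * .70)         # surviving instruction counts
time_regs = sum(counts[i] * cpi_base[i] for i in counts)    # ~3.12

print(time_windows, time_regs, time_windows / time_regs)    # speedup ~1.04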

4. In the 1980s and 1990s, architects debated whether the RISC or CISC approach was better. The list below notes some of the differences in philosophy between the two forms of architecture. For each of the following, explain how it would improve CPU time in terms of which factor in our CPU time formula would be decreased: IC, CPI, Clock Cycle Time, or some combination. NOTE: some of these may increase another factor, but you do not need to discuss what increases, only what decreases.
a. In RISC, there are a great number of registers available, less so in a CISC machine.
b. In CISC, there can be complex addressing modes, such as indirect addressing to obtain the datum pointed to by a pointer.
c. In RISC, a pipeline is used to perform each part of the fetch-execute cycle as an independent stage.
d. In CISC, variable-sized instruction lengths are common so that multiple memory operands can be accessed by the same instruction.
Answers:
a. With more registers, there is less need for loads and stores, so IC decreases. However, since CISC machines often have memory-register operations (such as add x, y, z), the actual impact is felt mostly in CPI: the add instruction in a RISC machine will have a low CPI since its operands must already be in registers, whereas the CISC add instruction will have a much higher CPI if it involves accessing memory one or more times per instruction.
b. The complex addressing modes allow memory accesses in single operations, whereas in a RISC architecture without complex addressing modes something like indirect addressing takes multiple operations; therefore this feature lowers IC.
c. Since all operations are pipelined, their CPI is reduced to approximately 1; therefore the pipeline lowers CPI.
d. The variable-sized instruction length allows instructions to carry out multiple tasks, and therefore fewer instructions are needed, lowering IC.

5. Let's see what might happen if we add a register-memory ALU mode to MIPS. We could replace the two instructions
LW R1, 0(R2)
DADDU R3, R3, R1
with
DADDU R3, 0(R2)
So that the new instruction fits in the 32-bit instruction length format, we restrict it to be a two-operand instruction where the first operand is both a source and a destination register. Assume that, to accommodate the memory fetch as part of this instruction, we increase the clock cycle time by 15%. Using the gcc benchmark (see figure A.27, p. A-41), what percentage of loads would have to be eliminated so that this new mode can execute gcc in the same amount of time?
Answer: We want CPU time_old = CPU time_new, where CPU time = IC * CPI * Clock Cycle Time. We will assume that CPI does not change, and we know Clock Cycle Time_new is 15% longer than Clock Cycle Time_old. So, to balance out, IC_new must be roughly 15% less than IC_old; that is, we have to reduce IC to about 85% of the old. Since loads make up 25.1% of the total instructions, we have to remove 15% / 25.1% = .60 of them, or about 60% of the loads.
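A quick sketch of that last calculation (my own; the 25.1% load frequency comes from the figure cited in the problem). The second line shows the slightly smaller cut that results if the balance equation IC_new * 1.15 = IC_old is solved exactly instead of using the 15% approximation:

load_fraction = 0.251                   # loads as a fraction of gcc's instructions
print(0.15 / load_fraction)             # ~0.60 -> eliminate about 60% of loads
print((1 - 1 / 1.15) / load_fraction)   # ~0.52 with the exact IC requirement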

6. The autoincrement and autodecrement modes are common in CISC computers. These modes are used when accessing an array, by automatically incrementing or decrementing the register storing the offset; the change occurs after the access for the increment, and before the access for the decrement. Let's see what happens in some standard array code with the new mode:
for(i=0;i<1000;i++) a[i]=b[i]+c[i];
Assume that R1, R2 and R3 store the starting addresses of arrays a, b and c respectively and that they are all int arrays. If we introduce an autoincrement instruction like LWI Rx, 0(Ry) in place of the LW instruction of MIPS, how will it impact performance? Below are the two sets of code, without and with the autoincrements. The CPI for our machine is as follows: 5 for loads/stores, 2 for ALU and 3 for branches. The autoincrement load/store also has a CPI of 5 but requires that we lengthen the clock cycle by 25%. Is the new mode worth pursuing?

DADD R4, R0, R0         // R4 is the loop variable i
DADDI R5, R0, #1000     // R5 = 1000
top: DSUB R6, R5, R4
BEQZ R6, out            // exit for loop after 1000 iterations
LW R7, 0(R2)            // R7 = b[i]
LW R8, 0(R3)            // R8 = c[i]
DADD R9, R7, R8         // R9 = b[i] + c[i]
SW R9, 0(R1)
DADDI R1, R1, #4
DADDI R2, R2, #4
DADDI R3, R3, #4
DADDI R4, R4, #1
J top
out: ...

DADD R4, R0, R0         // R4 is the loop variable i
DADDI R5, R0, #1000     // R5 = 1000
top: DSUB R6, R5, R4
BEQZ R6, out            // exit for loop after 1000 iterations
LWI R7, 0(R2)           // R7 = b[i]
LWI R8, 0(R3)           // R8 = c[i]
DADD R9, R7, R8         // R9 = b[i] + c[i]
SWI R9, 0(R1)
DADDI R4, R4, #1
J top
out: ...

Answer: We compare the two CPU times, where CPU Time = IC * CPI * Clock Cycle Time. The original machine has a shorter clock cycle time, while the newer machine has a reduced IC * CPI because we can remove three of the DADDI instructions.
CPU Time_original = (IC * CPI)_original * Clock Cycle Time_original
CPU Time_new = (IC * CPI)_new * Clock Cycle Time_new
We compute IC * CPI as follows. The original code has 2 ALU operations outside of the loop plus a loop body of 6 ALU, 2 branch and 3 load/store instructions, giving a total of IC * CPI = 2 * 2 + 1000 * (6 * 2 + 2 * 3 + 3 * 5) = 33,004 clock cycles. The new code has 2 ALU operations outside of the loop plus a loop body of 3 ALU, 2 branch and 3 load/store-with-increment instructions, giving a total of IC * CPI = 2 * 2 + 1000 * (3 * 2 + 2 * 3 + 3 * 5) = 27,004 clock cycles.
Clock Cycle Time_new = Clock Cycle Time_old * 1.25
CPU Time_old = 33,004 * Clock Cycle Time_old
CPU Time_new = 27,004 * Clock Cycle Time_new = 27,004 * Clock Cycle Time_old * 1.25
Speedup = CPU Time_old / CPU Time_new = 33,004 / (27,004 * 1.25) = 0.978, so we see a slowdown, not a speedup.
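The cycle counts can be reproduced with a small Python sketch (my own, not part of the notes):

def loop_cycles(alu, branch, loadstore, iterations=1000,
                cpi_alu=2, cpi_branch=3, cpi_ls=5, setup_alu=2):
    # Cycles for the two setup ALU instructions plus `iterations` trips through the loop body
    per_iteration = alu * cpi_alu + branch * cpi_branch + loadstore * cpi_ls
    return setup_alu * cpi_alu + iterations * per_iteration

original = loop_cycles(alu=6, branch=2, loadstore=3)   # 33,004 cycles
with_lwi = loop_cycles(alu=3, branch=2, loadstore=3)   # 27,004 cycles

# The autoincrement version stretches the clock cycle by 25%
print(original / (with_lwi * 1.25))                    # ~0.978, a slowdown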

7. As an alternative to #6, let's assume that the clock speed does not change, but that the CPI for LWI and SWI is 6. Is the change worth it?
Answer: Here, clock cycle time does not change, so we only have to compare IC * CPI for the two machines. The old machine's IC * CPI does not change. The new machine has IC * CPI = 2 * 2 + 1000 * (3 * 2 + 2 * 3 + 3 * 6) = 30,004. Since this is a reduction, the new mode would be worth it in this case. The speedup is 33,004 / 30,004 = 1.10.
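Continuing the sketch from problem 6 (this assumes loop_cycles and original from that sketch are still defined), the CPI-6 variant is a one-line change:

with_lwi_cpi6 = loop_cycles(alu=3, branch=2, loadstore=3, cpi_ls=6)   # 30,004 cycles
print(original / with_lwi_cpi6)                                       # ~1.10 speedup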
