Computer Architecture Basics


CIS 450 Computer Organization and Architecture
Copyright (c) 2002 Tim Bower

The interface between a computer's hardware and its software is its architecture. The architecture is described by what the computer's instructions do and how they are specified. Understanding how it all works requires knowledge of the structure of a computer and its assembly language.

What is a computer?

There are lots of machines in our world, but only some of those machines qualify as being a computer. What features make a machine a computer?

The very first machines which bore the label of a computer were designed using electro-mechanical switches. These switches were large, and the computers designed from them were more like automated adding machines than today's computers. A program written for these early machines was entered into the computer by setting an array of relays to be either an electrical short or an open circuit. This was often accomplished with the aid of a panel of plug-in contact points and cables. After setting the relays, the program could be executed. To execute a new program, the cables needed to be moved to form a new network of relays.

With the invention of the vacuum tube in the 1940s, faster computers could be designed which could also run more complicated programs. The real genesis of modern computers, however, came with the practice of storing a program in memory. The possibility of storing much larger programs in memory became reality with the invention of ferrite core memory in the 1950s.

According to mathematician John von Neumann, for a machine to be a computer it must have the following:

1. Addressable memory that holds both instructions and data
2. An arithmetic logic unit
3. A program counter

Put another way, it must be programmable. A computer executes the following simple loop for each program:

    pc = 0;
    do {
        instruction = memory[pc++];
        decode( instruction );
        fetch( operands );
        execute;
        store( results );
    } while( instruction != halt );

Note: instructions are the verbs and operands are the objects of this process.

In some architectures, such as the SPARC, the program counter is advanced by a set amount after each instruction is read. In the Intel x86, however, the size of the instruction varies, so as the instruction is read and decoded, the amount by which the program counter should be advanced is also determined.

The important computer architecture components from von Neumann's stored program control computer are:

CPU   Central processing unit. The engine of the computer that executes programs.

ALU   Arithmetic logic unit. This is the part of the CPU that executes individual instructions involving data (operands).
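To make the fetch, decode, and execute loop concrete, the following C program simulates a toy machine of this kind. The opcodes, the two-word instruction format, and the memory layout are invented purely for illustration and do not correspond to any real instruction set.

    #include <stdio.h>

    /* Invented opcodes for a toy accumulator-style machine. */
    enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_STORE = 3 };

    int main(void) {
        /* One memory holds both instructions and data, as von Neumann
         * requires.  Each instruction is two words: an opcode, then an
         * operand address. */
        int memory[32] = {
            OP_LOAD,  20,     /* acc = memory[20]       */
            OP_ADD,   21,     /* acc = acc + memory[21] */
            OP_STORE, 22,     /* memory[22] = acc       */
            OP_HALT,  0
        };
        memory[20] = 7;
        memory[21] = 10;

        int pc = 0, acc = 0, running = 1;
        while (running) {
            int opcode  = memory[pc++];            /* fetch the instruction    */
            int operand = memory[pc++];            /* and its operand address  */
            switch (opcode) {                      /* decode and execute       */
            case OP_LOAD:  acc = memory[operand];       break;
            case OP_ADD:   acc = acc + memory[operand]; break;
            case OP_STORE: memory[operand] = acc;       break;
            case OP_HALT:  running = 0;                 break;
            }
        }
        printf("memory[22] = %d\n", memory[22]);   /* prints 17 */
        return 0;
    }

Every iteration advances the program counter by a fixed amount (two words here), as in the SPARC; modeling the x86 would require the decode step to report how long the instruction was before the program counter could be advanced.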

[Figure: the computer architecture proposed by John von Neumann, showing a CPU that contains the ALU, the registers, the IR, and the PC, connected to a memory holding both data and instructions.]

Register   A memory location in the CPU which holds a fixed amount of data. Registers of most current systems hold 32 bits (4 bytes) of data.

PC   Program counter, also called the instruction pointer: a register which holds the memory address of the next instruction to be executed.

IR   Instruction register: a register which holds the current instruction being executed.

Acc   Accumulator: a register designated to hold the result of an operation performed by the ALU.

Register File   A collection of several registers.

Fundamental Computer Architectures

Here we describe the most common computer architectures, all of which use stored program control.

The Stack Machine

A stack machine implements a stack with registers. The operands of the ALU are always the top two registers of the stack, and the result from the ALU is stored in the top register of the stack. Examples of the stack machine include Hewlett Packard RPN calculators and the Java Virtual Machine (JVM). The advantage of a stack machine is that it can shorten the length of instructions, since operands are implicit. This was important when memory was expensive (20 to 30 years ago). Now, in Java, it is important since we want to ship executables (class files) over the network.

The Accumulator Machine

An accumulator machine has a special register, called an accumulator, whose contents are combined with another operand as input to the ALU, with the result of the operation replacing the contents of the accumulator.

Who is John von Neumann?

John Louis von Neumann was born 28 December 1903 in Budapest, Hungary and died 8 February 1957 in Washington, D.C. He was a brilliant mathematician, synthesizer, and promoter of the stored program concept, whose logical design of the Institute for Advanced Studies (IAS) computer became the prototype of most of its successors, the von Neumann Architecture.

Von Neumann was a child prodigy, born into a banking family in Budapest, Hungary. When only six years old he could divide eight-digit numbers in his head. At a time of political unrest in central Europe, he was invited to visit Princeton University in 1930, and when the Institute for Advanced Studies was founded there in 1933, he was appointed to be one of the original six Professors of Mathematics, a position which he retained for the remainder of his life.

By the latter years of World War II von Neumann was playing the part of an executive management consultant, serving on several national committees and applying his amazing ability to rapidly see through problems to their solutions. Through this means he was also a conduit between groups of scientists who were otherwise shielded from each other by the requirements of secrecy. He brought together the needs of the Los Alamos National Laboratory (and the Manhattan Project) with the capabilities of the engineers at the Moore School of Electrical Engineering who were building the ENIAC, and later built his own computer called the IAS machine. Several supercomputers were built by National Laboratories as copies of his machine.

Following the war, von Neumann concentrated on the development of the IAS computer and its copies around the world. His work with the Los Alamos group continued, and he continued to develop the synergism between computer capabilities and the need for computational solutions to nuclear problems related to the hydrogen bomb. His insights into the organization of machines led to the infrastructure which is now known as the von Neumann Architecture. However, von Neumann's ideas were not along those lines originally; he recognized the need for parallelism in computers but equally well recognized the problems of construction and hence settled for a sequential system of implementation. Through the report entitled First Draft of a Report on the EDVAC [1945], authored solely by von Neumann, the basic elements of the stored program concept were introduced to the industry.

In the 1950s von Neumann was employed as a consultant to IBM to review proposed and ongoing advanced technology projects. One day a week, von Neumann held court with IBM. On one of these occasions in 1954 he was confronted with the FORTRAN concept; John Backus remembered von Neumann being unimpressed with the idea of high level languages and compilers. Donald Gillies, one of von Neumann's students at Princeton, and later a faculty member at the University of Illinois, recalled in the mid-1970s that the graduate students were being used to hand assemble programs into binary for their early machine (probably the IAS machine). He took time out to build an assembler, but when von Neumann found out about it he was very angry, saying (paraphrased), "It is a waste of a valuable scientific computing instrument to use it to do clerical work."

Source: history/vonneumann.html

[Figures: block diagrams of the Stack Machine Architecture, the Accumulator Machine Architecture, and the Load/Store Machine Architecture. Each shows a CPU containing an ALU, an IR, and a PC connected to a memory that holds data and instructions; the stack machine adds a register stack, the accumulator machine an ACC register, and the load/store machine a register file.]

Example Machine Instructions

The statement y = y + 10; can be implemented on each of the three architectures as shown below. In the assembly notation, y stands for the address of the variable (&y in C), while [y] stands for the contents of memory at that address ([y] = *y = *&y, that is, the value of the C variable y).

    Stack Machine      Accumulator Machine      Load/Store Machine
    push [y]           load  [y]                load  r0, [y]
    push 10            add   10                 load  r1, 10
    add                store y                  add   r0, r1, r2
    pop  y                                      store r2, y

The accumulator machine computes

    accumulator = accumulator [op] operand;

In fact, many machines have more than one accumulator: the Pentium has 1, 2, 4, or 6 (depending on how you count), and the MC68000 has 16. In order to add two numbers in memory:

1. place one of the numbers into the accumulator (load operand)
2. execute the add instruction
3. store the contents of the accumulator back into memory (store operand)

The Load/Store Machine

Registers provide faster access but are expensive; memory provides slower access but is less expensive. A small amount of high speed (expensive) memory, called a register file, is provided for frequently accessed variables, and a much larger, slower (less expensive) memory is provided for the rest of the program and data (the SPARC has 32 registers visible at any one time). This is based on the principle of locality: at a given time, a program typically accesses a small number of variables much more frequently than others.

The machine loads and stores the registers from memory. The arithmetic and logic instructions use registers, not main memory, for the location of operands. Since the machine addresses only a small number of registers, the instruction field needed to refer to a register (operand) is short; therefore, these machines frequently have instructions with three operands:

    add src1, src2, dest

Machine Instructions

Machine instructions are classified into the following three categories:

1. data transfer operations (memory to register, register to register)
2. arithmetic logic operations (add, sub, and, or, xor, shift, etc.)
3. program control operations (branch, call, interrupt)

How the operands are specified is called the addressing mode. We will discuss addressing modes more later.
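Returning to the example above, the following C sketch steps a toy stack machine through the sequence push [y]; push 10; add; pop y. The instruction encoding, the stack depth, and the placement of y at address 0 are all invented for illustration.

    #include <stdio.h>

    /* Invented instruction encoding for a toy stack machine. */
    enum op { PUSH_MEM, PUSH_IMM, ADD, POP_MEM, HALT };
    struct insn { enum op op; int arg; };   /* arg is an address or an immediate */

    int main(void) {
        int memory[4] = { 0 };
        memory[0] = 32;                     /* pretend y lives at address 0, y = 32 */

        struct insn program[] = {           /* y = y + 10; for the stack machine */
            { PUSH_MEM, 0 },                /* push [y] */
            { PUSH_IMM, 10 },               /* push 10  */
            { ADD,      0 },                /* add      */
            { POP_MEM,  0 },                /* pop y    */
            { HALT,     0 }
        };

        int stack[8], sp = 0, pc = 0, running = 1;
        while (running) {
            struct insn i = program[pc++];
            switch (i.op) {
            case PUSH_MEM: stack[sp++] = memory[i.arg];  break;
            case PUSH_IMM: stack[sp++] = i.arg;          break;
            case ADD:      /* combine the top two entries, leave the result on top */
                           sp--; stack[sp - 1] = stack[sp - 1] + stack[sp]; break;
            case POP_MEM:  memory[i.arg] = stack[--sp];  break;
            case HALT:     running = 0;                  break;
            }
        }
        printf("y = %d\n", memory[0]);      /* prints 42 */
        return 0;
    }

Note that the add instruction names no operands at all; the operands are implicitly the top two entries of the stack, which is exactly why stack machine instructions can be so short.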

The Computer's Software

The program instructions are stored in memory in machine code, or machine language, format. An assembler is the program used to translate symbolic programs (assembly language) into machine language programs.

machine language   Low level computer instructions that are encoded into binary words.

assembly language   The lowest level human readable programming language. All of the detailed instructions for the computer are listed. Assembly programs are directly encoded into machine code. Assembly code can be written by humans, but is more typically produced by a compiler.

high level language   Humans typically write programs in a language which allows program logic to be expressed at a conceptual level, ignoring the implementation details which are required of assembly language programs.

Years ago, hardware efficiency was extracted at the expense of the programmer's time. If a fast program was needed, then it was written in assembly language. Compilers were capable of translating programs from high level languages, but they generated assembly language programs that were relatively inefficient compared with the same programs written by a programmer in assembly language. Programmers often found it necessary to optimize the assembly language code created by a compiler to improve the performance and reduce the memory requirements of the program.

This is no longer the case. Compilers have improved to the point that they can generate code comparable to, or better than, the code most programmers can generate. Even if hand crafted optimizations could improve the performance, there is little benefit derived from such a laborious activity. Many computers today execute so fast and have enough memory that it is not necessary to optimize code at the assembly language level. So, since it is increasingly rare for programmers to work at the assembly language level, why is it necessary to learn assembly language?
There are actually several reasons to study assembly language:

1. To understand or work on an operating system. Operating systems need to execute instructions which can not be expressed in a high level language, so it is necessary that a portion of an operating system be written in assembly language. Some instances when an operating system needs assembly language include: initializing the hardware and data in the CPU at boot time, handling interrupts, low level interfaces with hardware peripherals, and cases when a compiler's protection features interfere with the needed operations.

2. To understand or work on a compiler.

3. Real time or embedded systems programming, where there may be critical constraints for a program related either to performance or to available memory. In some cases with embedded systems, a compiler may not be available.

4. To understand the internal working of a computer. Computer architecture can best be understood when assembly language is used to supplement the study of computer architecture. Assembly language code does not hide details about what the computer is doing.

Complex Instruction Sets and Reduced Instruction Sets

Another important classification of computer architectures relates to the available set of instructions for the processor. Here we discuss the historical background and the technical differences between two types of processors.

If memory is an expensive and limited resource, there is a large benefit in reducing the size of a program. During the 1960s and 1970s, memory was at a premium, so much effort was expended on minimizing the size of individual instructions and minimizing the number of instructions necessary to implement a program. During this time period, almost all computer designers believed that rich instruction sets would simplify compiler design and improve the quality of computer architecture. New instructions were developed to replace frequently used sequences of instructions. For example, a loop variable is often decremented, followed by a branch operation if the result is positive; new architectures therefore introduced a single instruction to decrement a variable and branch conditionally based on the result. Some instructions came to be more like a procedure than a simple operation. Some of these powerful single instructions required four or more parameters. As an example, the IBM System/370 has a single instruction that copies a character string of arbitrary length from any location in memory to any other location in memory, while translating characters according to a table stored in memory.

Computers which feature a large number of complex instructions are classified as complex instruction set computers (CISC). Other examples of CISC computers include the Digital Equipment VAX and the Intel x86 line of processors. The DEC VAX has more than 200 instructions, dozens of distinct addressing modes, and instructions with as many as six operands. The complexity of CISC was accommodated by the introduction of microprogramming, or microcode. Microcode is composed of low-level hardware instructions that implement the high-level instructions required by an architecture. Microcode was placed in ROM or control-store RAM (which is more expensive, but faster, than the ferrite-core memory used in many computers).

However, not all computer designers fell in line with the CISC philosophy. Seymour Cray, for one, believed that complexity was bad, and continued to build the fastest computers in the world by using simple, register-oriented instruction sets. Cray was a proponent of the Reduced Instruction Set Computer (RISC), which is the antidote to CISC. The CDC 6600 and the Cray-1 supercomputer were the precursors of modern RISC architectures. In 1975, Cray made the following remarks about his computer design:

    [Registers] made the instructions very simple. That is somewhat unique. Most machines have rather elaborate instruction sets involving many more memory references in the instructions than the machines I have designed. Simplicity, I guess, is a way of saying it. I am all for simplicity. If it's very complicated, I cannot understand it.

Various technological changes in the 1980s made the architectural assumptions of the 1970s no longer valid. Faster (10 times or more) and cheaper semiconductor memory and integrated circuits began to replace ferrite-core and transistor based discrete circuits. The invention of cache memories substantially improved the speed of non-microcoded programs. Compiler technology had progressed rapidly; optimizing compilers generated code that used only a small subset of most instruction sets. A new set of simplified design criteria emerged:

- Instructions should be simple unless there is a good reason for complexity. To be worthwhile, a new instruction that increases cycle time by 10% must reduce the total number of cycles executed by at least 10%.
- Microcode is generally no faster than sequences of hardwired instructions. Moving software into microcode does not make it better; it just makes it harder to modify.
- Fixed format instructions and pipelined execution are more important than program size. (Pipelining parallelizes the steps of the instruction execution loop: the next instruction is fetched and decoded while the current instruction is executing. We will discuss pipelining more when we study the Sun SPARC architecture.)
- As memory becomes cheaper and faster, the space/time tradeoff is resolved in favor of time; reducing space no longer decreases time.
- Compiler technology should simplify instructions rather than generate more complex instructions. Instead of adding a complicated microcoded instruction, optimizing compilers can generate sequences of simple, fast instructions to do the job. Operands can be kept in registers to increase speed even more.

What is RISC?

Assembly language programs occasionally use large sets of machine instructions, whereas high level language compilers generally do not. For example, SUN's C compiler uses only about 30% of the available Motorola instructions. Studies show that approximately 80% of the computation for a typical program requires only 20% of a processor's instruction set. The designers of RISC machines strive for hardware simplicity, with close cooperation between machine architecture and compiler design. In order to add a new instruction, computer architects must ask:

- to what extent would the added instruction improve performance, and is it worth the cost of implementation?
- no matter how useful it is in an isolated instance, would it make all other instructions perform more slowly by its mere presence?

The goal of RISC architecture is to maximize the effective speed of a design by performing infrequent functions in software and by including in hardware only features that yield a net performance gain. Performance gains are measured by conducting detailed studies of large high level language programs. RISC architectures eliminate complicated instructions that require microcode support.

RISC Architecture

The following characteristics are typical of RISC architectures. Although none of these are required for an architecture to be called RISC, this list does describe most current RISC architectures, including the SPARC design.

1. Single cycle execution: most instructions are executed in a single machine cycle.

2. Hardwired control with little or no microcode: microcode adds a level of complexity and raises the number of cycles per instruction.

3. Load/Store, register-to-register design: all computational instructions involve registers. Memory accesses are made with only load and store instructions.

4. Simple fixed-format instructions with few addressing modes: all instructions are the same length (typically 32 bits) and have just a few ways to address memory.

5. Pipelining: the instruction set design allows for the processing of several instructions at the same time.

6. High performance memory: RISC machines have at least 32 general purpose registers and large cache memories.

7. Migration of functions to software: only those features that measurably improve performance are implemented in hardware. Software contains sequences of simple instructions for executing complex functions rather than complex instructions themselves, which improves system efficiency.

8. More concurrency is visible to software: for example, branches take effect after execution of the following instruction, permitting a fetch of the next instruction during execution of the current instruction.

The real keys to enhanced performance are single-cycle execution and keeping the cycle time as short as possible. Many characteristics of RISC architectures, such as load/store and register-to-register design, facilitate single-cycle execution. Simple fixed-format instructions, in turn, permit shorter cycles by reducing decoding time.

Early RISC Machines

In the mid 1970s, some computer architects observed that even complex computers execute mostly simple instructions. This observation led to work on the IBM 801, the first intentional RISC machine (even though the term RISC had yet to be coined). The term RISC was coined as part of David Patterson's 1980 course in microprocessor design at the University of California at Berkeley. The RISC-I chip design was completed in 1982, and the RISC-II chip design was completed in 1984. The RISC-II was a 32-bit microprocessor with 138 registers and a 330-ns (3 MHz) cycle time. Without the aid of elaborate compiler technology, the RISC-II outperformed the VAX 11/780 at integer arithmetic.

[Figure: a typical memory system. The CPU, containing the registers, register file, and L1 cache, is connected over the memory bus to the L2 cache, main memory, and the I/O devices, including the disk.]

    Memory:   Registers   L1 Cache   L2 Cache   Main Memory   Disk
    Size:     200 B       128 KB     256 KB     128 MB        30 GB
    Speed:    5 ns        6 ns       10 ns      100 ns        5 ms

Memory Hierarchy Design

Memory hierarchy design is based on three important principles:

- Make the common case fast.
- Principle of locality: spatial locality is the tendency of programs to access data located near recently accessed data, and temporal locality is the tendency of programs to access the same data several times in a short period of time.
- Smaller is faster.

These are the levels in a typical memory hierarchy. Moving farther away from the CPU, the memory in each level becomes larger and slower. When a memory lookup is required, the L1 cache is searched first. If the data is found, this is called a hit. If the data is not in the L1 cache, this is called a miss and the L2 cache is checked. If the data is not in the L2 cache, then the data is retrieved from main memory. When there is a miss at either the L1 or L2 cache, the data retrieved from the next level is saved in the cache for future use. Cache hits make the program run much faster than if all memory accesses had to go to main memory.

The connection between the CPU and main memory is called the front-side bus. A common design is for the front-side bus to be divided into four channels; if the front-side bus speed is listed at 800 MHz, it is probably four channels each running at 200 MHz. The connection between the CPU and the L2 cache is called the backside bus.

Binary Representation of Data

Here we briefly consider the format used to store data variables in memory and in registers. If you need more detail than is provided here, then check your notes from EECE 241 or other resources.
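The effect of locality and the cache hierarchy can be observed directly from a high level language. The following C sketch sums the same two-dimensional array twice, once in row-major order (good spatial locality) and once in column-major order (poor spatial locality). The array size and the measured times are machine dependent and chosen only for illustration.

    #include <stdio.h>
    #include <time.h>

    #define N 4096
    static int a[N][N];     /* about 64 MB of zero-initialized data */

    int main(void) {
        long sum = 0;
        clock_t t0, t1;

        /* Row-major order: consecutive accesses touch adjacent addresses,
         * so most accesses hit in the cache. */
        t0 = clock();
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                sum += a[i][j];
        t1 = clock();
        printf("row-major:    %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

        /* Column-major order: consecutive accesses are N*sizeof(int) bytes
         * apart, so many accesses miss and must go to the next level. */
        t0 = clock();
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                sum += a[i][j];
        t1 = clock();
        printf("column-major: %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

        return (int)(sum & 1);   /* keep the compiler from discarding the loops */
    }

On most systems the second loop is noticeably slower even though it performs exactly the same number of additions; the only difference is which level of the memory hierarchy satisfies each access.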

[Figure: the memory hierarchy pyramid. From top to bottom the levels are registers, L1 cache, L2 cache, main memory, and disk; levels toward the top are faster and more expensive, while levels toward the bottom hold a larger quantity of memory.]

Integer Variables

Unsigned variables that generally fall into the category of integers (char, short, int, long) are stored in straight binary format, beginning with all zeros for zero up to all ones for the largest number that can be represented by the data type.

The signed variables that generally fall into the category of integers (char, short, int, long) are stored in 2's complement format. This ensures that the binary digits represent a continuous number line from the most negative number to the largest positive number, with zero being represented by all zero bits. The most significant bit is considered the sign bit; the sign bit is one for negative numbers and zero for positive numbers.

    Decimal          int (hex)     short (hex)
    -2,147,483,648   0x80000000
    -2,147,483,647   0x80000001
    -32,768          0xffff8000    0x8000
    -32,767          0xffff8001    0x8001
    -2               0xfffffffe    0xfffe
    -1               0xffffffff    0xffff
    0                0x00000000    0x0000
    1                0x00000001    0x0001
    32,767           0x00007fff    0x7fff
    2,147,483,647    0x7fffffff

Any two binary numbers can thus be added together in a straightforward manner to get the correct answer. If there is a carry bit beyond what the data type can represent, it is discarded.

      1        0x0001
    +(-1)    + 0xffff
    -----    --------
      0        0x0000   (the carry out of the high bit is discarded)

To change the sign of any number, invert all the bits and add 1:

    2 = 0x0002 = 0000 0000 0000 0010
    invert:      1111 1111 1111 1101
    add 1:       1111 1111 1111 1110 = 0xfffe = -2
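A short C program can confirm the table and the negation rule. The values printed here assume the common case of a 16-bit short and a 32-bit int.

    #include <stdio.h>

    int main(void) {
        short s = -2;
        int   i = -2;

        /* Raw bit patterns: 0xfffe and 0xfffffffe with 16-bit short
         * and 32-bit int (the assumption used throughout this section). */
        printf("-2 as short: 0x%04hx\n", (unsigned short)s);
        printf("-2 as int:   0x%08x\n",  (unsigned int)i);

        /* Negation by inverting all the bits and adding 1. */
        unsigned short x = 0x0002;
        unsigned short negated = (unsigned short)(~x + 1);
        printf("~0x0002 + 1 = 0x%04hx\n", negated);        /* 0xfffe */

        /* 1 + (-1): the carry out of the top bit is discarded. */
        unsigned short sum = (unsigned short)(0x0001 + 0xffff);
        printf("0x0001 + 0xffff = 0x%04hx\n", sum);        /* 0x0000 */
        return 0;
    }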

[Figure: Big Endian and Little Endian memory maps for memory at address X containing the value 0x12345678. In the Big Endian map the most significant byte, 0x12, is stored at address X and the address labels are drawn on the left side of the map; in the Little Endian map the least significant byte, 0x78, is stored at address X and the address labels are drawn on the right side of the map.]

Conversions of Integer Variables

It is often necessary to convert a smaller data type to a larger type. For this, there are either special instructions (Intel x86) or a sequence of a couple of simple instructions (Sun SPARC) to promote a variable to a larger data type. If the variable is unsigned, then extra zeros are simply filled into the most significant bits (movzx, move with zero extension, on the Intel x86). For signed variables, the sign bit needs to be extended to fill the most significant bits (movsx, move with sign extension, on the Intel x86).

    0x6fa1 ==> 0x00006fa1   (sign extend a positive number)
    0xfffe ==> 0xfffffffe   (sign extend a negative number)
    0x9002 ==> 0xffff9002   (sign extend a negative number)

Byte Order

Not all computers store the bytes of a variable in the same order. The Intel x86 line of processors stores the least significant byte in the lowest memory address (the right most position in the memory map) and the most significant byte in the highest memory address. This scheme is called Little Endian. Sun SPARC and most other UNIX platforms do the opposite: they store the most significant byte in the lowest memory address. SPARC is thus considered a Big Endian machine. In a TCP/IP packet, the first transmitted data is the most significant byte, thus the Internet is considered Big Endian.

The lowest memory address is considered the memory address for a variable. Hence we see a difference between Little Endian and Big Endian when we draw memory maps. With Little Endian (Intel) we label the location of an address on the right side of the map; with Big Endian (SPARC), labels are placed on the left side of the map.

The term is used because of an analogy with the story Gulliver's Travels, in which Jonathan Swift imagined a never-ending fight between the kingdoms of the Big-Endians and the Little-Endians, whose only difference is in where they crack open a hard-boiled egg.
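Both the byte order of a machine and the zero/sign extension rules can be observed from C. The sketch below examines the bytes of an int through a char pointer and assumes a 16-bit short, a 32-bit int, and 2's complement representation.

    #include <stdio.h>

    int main(void) {
        /* Byte order: look at the individual bytes of a known 32-bit value. */
        unsigned int x = 0x12345678;
        unsigned char *p = (unsigned char *)&x;
        printf("bytes at increasing addresses: %02x %02x %02x %02x\n",
               p[0], p[1], p[2], p[3]);
        if (p[0] == 0x78)
            printf("this machine is Little Endian\n");
        else if (p[0] == 0x12)
            printf("this machine is Big Endian\n");

        /* Zero extension vs. sign extension when promoting 16 bits to 32. */
        unsigned short u = 0x9002;
        short          s = (short)0x9002;
        printf("zero extended: 0x%08x\n", (unsigned int)u);        /* 0x00009002 */
        printf("sign extended: 0x%08x\n", (unsigned int)(int)s);   /* 0xffff9002 */
        return 0;
    }

The unsigned-to-int conversion corresponds to what movzx does on the x86, and the signed conversion corresponds to movsx.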

[Figure: IEEE FPS floating point formats.]

    Single precision:  1 sign bit | 8 exponent bits  | 23 mantissa bits   (32 bits)
    Double precision:  1 sign bit | 11 exponent bits | 52 mantissa bits   (64 bits)

Floating Point Variables

Floating point variables have been represented in many different ways inside computers of the past, but there is now a well adhered to standard for the representation of floating point variables. The standard is known as the IEEE Floating Point Standard (FPS). Like scientific notation, FPS represents numbers with multiple parts: a sign bit, one part specifying the mantissa, and a part representing the exponent. The mantissa is represented as a signed magnitude integer (i.e., not 2's complement), where the value is normalized. The exponent is represented as an unsigned integer which is biased to accommodate negative numbers: an 8-bit unsigned value would normally have a range of 0 to 255, but 127 is added to the exponent, giving it a range of -126 to +127.

Follow these steps to convert a number to FPS format:

1. First convert the number to binary.

2. Normalize the number so that there is one nonzero digit to the left of the binary point, adjusting the exponent as necessary.

3. The digits to the right of the binary point are then stored as the mantissa, starting with the most significant bits of the mantissa field. Because all numbers are normalized, there is no need to store the leading 1. Note: because the leading 1 is dropped, it is no longer proper to refer to the stored value as the mantissa; in IEEE terms, this mantissa minus its leading digit is called the significand.

4. Add 127 to the exponent and convert the resulting sum to binary for the stored exponent value. For double precision, add 1023 to the exponent. Be sure to include all 8 or 11 bits of the exponent.

5. The sign bit is a one for negative numbers and a zero for positive numbers.

6. Compilers often express FPS numbers in hexadecimal, so a quick conversion to hexadecimal might be desired.

Here are some examples using single precision FPS.

    3.5 = 11.1 (binary) = 1.11 x 2^1
    sign = 0, significand = 1100...0, exponent = 1 + 127 = 128 = 1000 0000
    FPS number (3.5) = 0 10000000 11000000000000000000000 = 0x40600000
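Conversions like this one can be checked with a short C program that copies a float into a 32-bit integer and prints it in hexadecimal, and that goes the other way for the reverse exercise. It assumes the usual 32-bit IEEE single precision float used by virtually all current compilers; memcpy is used to avoid relying on pointer or union tricks.

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    /* Return the IEEE single precision bit pattern of a float. */
    static uint32_t float_to_bits(float f) {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);
        return bits;
    }

    /* Decode a 32-bit pattern back into a float. */
    static float bits_to_float(uint32_t bits) {
        float f;
        memcpy(&f, &bits, sizeof f);
        return f;
    }

    int main(void) {
        printf("3.5   -> 0x%08x\n", (unsigned)float_to_bits(3.5f));    /* 0x40600000 */
        printf("100.0 -> 0x%08x\n", (unsigned)float_to_bits(100.0f));  /* 0x42c80000 */

        uint32_t pattern = 0x42c80000;                 /* the reverse direction */
        printf("0x%08x -> %g\n", (unsigned)pattern, bits_to_float(pattern));
        return 0;
    }

The remaining examples below can be checked the same way.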

    100 = 1100100 (binary) = 1.100100 x 2^6
    sign = 0, significand = 100100...0, exponent = 6 + 127 = 133 = 1000 0101
    FPS number (100) = 0 10000101 10010000000000000000000 = 0x42c80000

What decimal number is represented in FPS as 0xc2......? Here we just reverse the steps: the sign bit is 1, so the number is negative; the stored exponent field is 1000 0100 = 132, so the exponent is 132 - 127 = 5; and restoring the implied leading 1 in front of the stored significand bits gives a value of the form -1.significand x 2^5, that is, a magnitude between 32 and 64.

Floating Point Arithmetic

Until fairly recently, floating point arithmetic was performed using complex algorithms with an integer ALU; the main ALU in CPUs is still an integer arithmetic ALU. However, in the mid-1980s, special hardware was developed to perform floating point arithmetic. Intel, for example, sold a separate math co-processor chip (the x87 family) to go along with the CPU. Most people did not buy the co-processor because of the cost. A major selling point of the 80486 was that the math co-processor was integrated onto the CPU, which eliminated the need to purchase a separate chip to get faster floating point arithmetic.

Floating point hardware usually has a special set of registers and instructions for performing floating point arithmetic. There are also special instructions for moving data between memory or the normal registers and the floating point registers. Most of the discussion in this class will focus on integer operations, but we will try to show at least a couple of examples of floating point arithmetic.

Role of the Operating System

The operating system (OS) is a program that allocates and controls the use of all system resources: the processor, the main memory, and all I/O devices. In addition, the operating system allows multiple, independent programs to share computer resources while running concurrently. But when we look at our programs (written in any language), we don't see any allowance for the operating system or any other program; the code is written as if our program is the only program running. So how is this accomplished? How does the operating system get control back from user programs to do its work?
The answer relates to the tight coupling between key parts of the code in the OS kernel, the architecture of the CPU, and something called interrupts. (The kernel of an OS is the critical part of the OS that handles its lowest levels, such as scheduling of processes, memory management, and device control; it is not related to the user interface or utilities provided by the OS.)

When a computer is turned on, or booted, the OS (Windows, Linux, Minix, Solaris, etc.) initializes the hardware and also builds critical data structures in memory. Most of the data structures are used by the operating system kernel; however, some of the data structures are laid out according to the specification of the CPU manufacturer. This CPU-specific data is used to switch processing between user programs and the kernel.

In the Intel x86, for example, two special registers in the CPU hold pointers to memory used when an interrupt is received. When a hardware event occurs, such as when a key is pressed on the keyboard, a hardware interrupt is issued. The CPU then reads a register to get a pointer to a stack where it will save some of the key register values; this is not the same stack that the user program uses. The CPU then reads another register to get a pointer to a special table called the interrupt descriptor table. It also checks with the interrupt hardware to get a vector identifying which interrupt occurred. Then, based on which interrupt occurred and the information in the interrupt descriptor table, the CPU causes processing to switch from running a user level program to running an interrupt handler in the kernel. All of the operations described above are done automatically by the CPU when an interrupt is received. Thus, the reception of an interrupt is how user programs are suspended and processing is switched to the kernel.

Once the kernel gets control, it will want to save more registers from the user program, handle the hardware event, and check if work needs to be done related to internal operations such as memory or process management. Then, finally, the kernel will let a user program run again. In doing so, it will restore some registers and issue a special instruction that causes the final registers to be restored and processing to switch back to the user program. Since all the registers are restored, the user program never knows that it was interrupted.

There are three types of interrupts which the CPU recognizes.

Hardware Interrupt   Any type of hardware event, such as a key pressed on the keyboard, a hard disk completing the reading or writing of data, or the reception of an ethernet packet. Many operating systems program a clock to issue interrupts at regular intervals so that the kernel is guaranteed to get control on a regular basis even if no hardware events occur and a user program never releases the CPU.

Software Interrupt   When a user program needs to make a system call to the operating system, such as for I/O or to request more memory, it may issue a special instruction called a software interrupt to cause the CPU to switch processing to the kernel.

Trap   A trap is issued by the CPU itself when it detects that something is wrong or needs special attention. In most cases a trap is issued when a user program performs an illegal operation such as a divide by zero or an illegal memory reference. In the Sun SPARC, there are some traps which occur in the normal processing of a program.

Most of the kernel's code is termed reentrant, meaning that additional interrupts may be received even while processing a previous interrupt. There are special assembly language instructions to turn interrupts off or on. Interrupts are turned off in critical sections of the kernel where an interrupt would cause memory corruption in the kernel. When interrupts are turned off, interrupts are queued by the hardware and will be issued when interrupts are turned on again. A critical concern in operating system design is knowing when to turn interrupts off and on. Interrupts should be left on except when absolutely necessary, so operating systems use clever algorithms to make as much of the kernel reentrant as possible.

More will be discussed about operating systems as related to computer architecture and assembly language later in the semester, after more specifics of the processors and assembly language have been covered.
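The dispatch idea behind an interrupt descriptor table, a table of handler addresses indexed by a vector number, can be sketched in ordinary C. This is only a user-space analogy: the vector numbers and handlers below are invented, and a real IDT is built by the kernel and consulted by the CPU hardware itself, not by application code.

    #include <stdio.h>

    typedef void (*handler_t)(void);

    static void keyboard_handler(void) { printf("key pressed\n"); }
    static void timer_handler(void)    { printf("clock tick\n");  }
    static void default_handler(void)  { printf("unexpected interrupt\n"); }

    #define NVECTORS 8
    static handler_t idt[NVECTORS];    /* table of handler addresses */

    /* Conceptually what the CPU and kernel glue do: look up the vector
     * in the table and transfer control to the registered handler. */
    static void dispatch(int vector) {
        if (vector >= 0 && vector < NVECTORS && idt[vector] != NULL)
            idt[vector]();
        else
            default_handler();
    }

    int main(void) {
        idt[0] = timer_handler;        /* pretend vector 0 is the clock    */
        idt[1] = keyboard_handler;     /* pretend vector 1 is the keyboard */

        dispatch(0);    /* simulate a timer interrupt    */
        dispatch(1);    /* simulate a keyboard interrupt */
        dispatch(5);    /* no handler registered         */
        return 0;
    }

A real interrupt descriptor table entry also carries privilege and segment information, and the hardware saves and restores register state around the handler, which is exactly the mechanism described above.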


COMPUTER ORGANIZATION AND ARCHITECTURE. Slides Courtesy of Carl Hamacher, Computer Organization, Fifth edition,mcgrawhill COMPUTER ORGANIZATION AND ARCHITECTURE Slides Courtesy of Carl Hamacher, Computer Organization, Fifth edition,mcgrawhill COMPUTER ORGANISATION AND ARCHITECTURE The components from which computers are built,

More information

picojava TM : A Hardware Implementation of the Java Virtual Machine

picojava TM : A Hardware Implementation of the Java Virtual Machine picojava TM : A Hardware Implementation of the Java Virtual Machine Marc Tremblay and Michael O Connor Sun Microelectronics Slide 1 The Java picojava Synergy Java s origins lie in improving the consumer

More information

TYPES OF COMPUTERS AND THEIR PARTS MULTIPLE CHOICE QUESTIONS

TYPES OF COMPUTERS AND THEIR PARTS MULTIPLE CHOICE QUESTIONS MULTIPLE CHOICE QUESTIONS 1. What is a computer? a. A programmable electronic device that processes data via instructions to output information for future use. b. Raw facts and figures that has no meaning

More information

Binary Numbers. Binary Octal Hexadecimal

Binary Numbers. Binary Octal Hexadecimal Binary Numbers Binary Octal Hexadecimal Binary Numbers COUNTING SYSTEMS UNLIMITED... Since you have been using the 10 different digits 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9 all your life, you may wonder how

More information

MICROPROCESSOR. Exclusive for IACE Students www.iace.co.in iacehyd.blogspot.in Ph: 9700077455/422 Page 1

MICROPROCESSOR. Exclusive for IACE Students www.iace.co.in iacehyd.blogspot.in Ph: 9700077455/422 Page 1 MICROPROCESSOR A microprocessor incorporates the functions of a computer s central processing unit (CPU) on a single Integrated (IC), or at most a few integrated circuit. It is a multipurpose, programmable

More information

Learning Outcomes. Simple CPU Operation and Buses. Composition of a CPU. A simple CPU design

Learning Outcomes. Simple CPU Operation and Buses. Composition of a CPU. A simple CPU design Learning Outcomes Simple CPU Operation and Buses Dr Eddie Edwards eddie.edwards@imperial.ac.uk At the end of this lecture you will Understand how a CPU might be put together Be able to name the basic components

More information

More on Pipelining and Pipelines in Real Machines CS 333 Fall 2006 Main Ideas Data Hazards RAW WAR WAW More pipeline stall reduction techniques Branch prediction» static» dynamic bimodal branch prediction

More information

CPU Organisation and Operation

CPU Organisation and Operation CPU Organisation and Operation The Fetch-Execute Cycle The operation of the CPU 1 is usually described in terms of the Fetch-Execute cycle. 2 Fetch-Execute Cycle Fetch the Instruction Increment the Program

More information

This Unit: Putting It All Together. CIS 501 Computer Architecture. Sources. What is Computer Architecture?

This Unit: Putting It All Together. CIS 501 Computer Architecture. Sources. What is Computer Architecture? This Unit: Putting It All Together CIS 501 Computer Architecture Unit 11: Putting It All Together: Anatomy of the XBox 360 Game Console Slides originally developed by Amir Roth with contributions by Milo

More information

Topics. Introduction. Java History CS 146. Introduction to Programming and Algorithms Module 1. Module Objectives

Topics. Introduction. Java History CS 146. Introduction to Programming and Algorithms Module 1. Module Objectives Introduction to Programming and Algorithms Module 1 CS 146 Sam Houston State University Dr. Tim McGuire Module Objectives To understand: the necessity of programming, differences between hardware and software,

More information

Computer Systems Design and Architecture by V. Heuring and H. Jordan

Computer Systems Design and Architecture by V. Heuring and H. Jordan 1-1 Chapter 1 - The General Purpose Machine Computer Systems Design and Architecture Vincent P. Heuring and Harry F. Jordan Department of Electrical and Computer Engineering University of Colorado - Boulder

More information

MACHINE ARCHITECTURE & LANGUAGE

MACHINE ARCHITECTURE & LANGUAGE in the name of God the compassionate, the merciful notes on MACHINE ARCHITECTURE & LANGUAGE compiled by Jumong Chap. 9 Microprocessor Fundamentals A system designer should consider a microprocessor-based

More information

Generations of the computer. processors.

Generations of the computer. processors. . Piotr Gwizdała 1 Contents 1 st Generation 2 nd Generation 3 rd Generation 4 th Generation 5 th Generation 6 th Generation 7 th Generation 8 th Generation Dual Core generation Improves and actualizations

More information

PROBLEMS (Cap. 4 - Istruzioni macchina)

PROBLEMS (Cap. 4 - Istruzioni macchina) 98 CHAPTER 2 MACHINE INSTRUCTIONS AND PROGRAMS PROBLEMS (Cap. 4 - Istruzioni macchina) 2.1 Represent the decimal values 5, 2, 14, 10, 26, 19, 51, and 43, as signed, 7-bit numbers in the following binary

More information

Levels of Programming Languages. Gerald Penn CSC 324

Levels of Programming Languages. Gerald Penn CSC 324 Levels of Programming Languages Gerald Penn CSC 324 Levels of Programming Language Microcode Machine code Assembly Language Low-level Programming Language High-level Programming Language Levels of Programming

More information

on an system with an infinite number of processors. Calculate the speedup of

on an system with an infinite number of processors. Calculate the speedup of 1. Amdahl s law Three enhancements with the following speedups are proposed for a new architecture: Speedup1 = 30 Speedup2 = 20 Speedup3 = 10 Only one enhancement is usable at a time. a) If enhancements

More information

An Introduction to Computer Science and Computer Organization Comp 150 Fall 2008

An Introduction to Computer Science and Computer Organization Comp 150 Fall 2008 An Introduction to Computer Science and Computer Organization Comp 150 Fall 2008 Computer Science the study of algorithms, including Their formal and mathematical properties Their hardware realizations

More information

Instruction Set Architecture

Instruction Set Architecture Instruction Set Architecture Consider x := y+z. (x, y, z are memory variables) 1-address instructions 2-address instructions LOAD y (r :=y) ADD y,z (y := y+z) ADD z (r:=r+z) MOVE x,y (x := y) STORE x (x:=r)

More information

Pentium vs. Power PC Computer Architecture and PCI Bus Interface

Pentium vs. Power PC Computer Architecture and PCI Bus Interface Pentium vs. Power PC Computer Architecture and PCI Bus Interface CSE 3322 1 Pentium vs. Power PC Computer Architecture and PCI Bus Interface Nowadays, there are two major types of microprocessors in the

More information

Chapter 6. Inside the System Unit. What You Will Learn... Computers Are Your Future. What You Will Learn... Describing Hardware Performance

Chapter 6. Inside the System Unit. What You Will Learn... Computers Are Your Future. What You Will Learn... Describing Hardware Performance What You Will Learn... Computers Are Your Future Chapter 6 Understand how computers represent data Understand the measurements used to describe data transfer rates and data storage capacity List the components

More information

AC 2007-2027: A PROCESSOR DESIGN PROJECT FOR A FIRST COURSE IN COMPUTER ORGANIZATION

AC 2007-2027: A PROCESSOR DESIGN PROJECT FOR A FIRST COURSE IN COMPUTER ORGANIZATION AC 2007-2027: A PROCESSOR DESIGN PROJECT FOR A FIRST COURSE IN COMPUTER ORGANIZATION Michael Black, American University Manoj Franklin, University of Maryland-College Park American Society for Engineering

More information

Number Representation

Number Representation Number Representation CS10001: Programming & Data Structures Pallab Dasgupta Professor, Dept. of Computer Sc. & Engg., Indian Institute of Technology Kharagpur Topics to be Discussed How are numeric data

More information

IA-64 Application Developer s Architecture Guide

IA-64 Application Developer s Architecture Guide IA-64 Application Developer s Architecture Guide The IA-64 architecture was designed to overcome the performance limitations of today s architectures and provide maximum headroom for the future. To achieve

More information

(Refer Slide Time: 02:39)

(Refer Slide Time: 02:39) Computer Architecture Prof. Anshul Kumar Department of Computer Science and Engineering, Indian Institute of Technology, Delhi Lecture - 1 Introduction Welcome to this course on computer architecture.

More information

12. Introduction to Virtual Machines

12. Introduction to Virtual Machines 12. Introduction to Virtual Machines 12. Introduction to Virtual Machines Modern Applications Challenges of Virtual Machine Monitors Historical Perspective Classification 332 / 352 12. Introduction to

More information

Exceptions in MIPS. know the exception mechanism in MIPS be able to write a simple exception handler for a MIPS machine

Exceptions in MIPS. know the exception mechanism in MIPS be able to write a simple exception handler for a MIPS machine 7 Objectives After completing this lab you will: know the exception mechanism in MIPS be able to write a simple exception handler for a MIPS machine Introduction Branches and jumps provide ways to change

More information

Chapter 3: Operating-System Structures. System Components Operating System Services System Calls System Programs System Structure Virtual Machines

Chapter 3: Operating-System Structures. System Components Operating System Services System Calls System Programs System Structure Virtual Machines Chapter 3: Operating-System Structures System Components Operating System Services System Calls System Programs System Structure Virtual Machines Operating System Concepts 3.1 Common System Components

More information

Monday January 19th 2015 Title: "Transmathematics - a survey of recent results on division by zero" Facilitator: TheNumberNullity / James Anderson, UK

Monday January 19th 2015 Title: Transmathematics - a survey of recent results on division by zero Facilitator: TheNumberNullity / James Anderson, UK Monday January 19th 2015 Title: "Transmathematics - a survey of recent results on division by zero" Facilitator: TheNumberNullity / James Anderson, UK It has been my pleasure to give two presentations

More information