CDA 3101: Introduction to Computer Hardware and Organization. Supplementary Notes


Charles N. Winton
Department of Computer and Information Sciences
University of North Florida
Jacksonville, FL 32224-2645

Levels of organization of a computer system:
  a) Electronic circuit level
  b) Logic level - combinational logic*, sequential logic*, register-transfer logic*
  c) Programming level - microcode programming*, machine/assembly language programming, high-level language programming
  d) Computer systems level - systems hardware* (basic hardware architecture and organization - memory, CPU (ALU, control unit), I/O, bus structures), systems software, application systems
  * topics discussed in these notes

Objectives:
Understand computer organization and component logic
  - Boolean algebra and truth table logic
  - Integer arithmetic and implementation algorithms
  - IEEE floating point standard and floating point algorithms
  - Register construction
  - Memory construction and organization
  - Register transfer logic
  - CPU organization
  - Machine language instruction implementation
Develop a foundation for
  - Computer architecture
  - Microprocessor interfacing
  - System software

Sections: combinational logic; sequential logic; computer architecture

Contents

Section I - Logic Level: Combinational Logic
  Table of binary operations
  Graphical symbols for logic gates
  Representing data
  2's complement representation
  Gray code
  Boolean algebra
  Canonical forms
  Σ and Π notations
  NAND-NOR conversions
  Circuit analysis
  Circuit simplification: K-maps
  Circuit design
  Gray to binary decoder
  BCD to 7-segment display decoder
  Arithmetic circuits
  AOI gates
  Decoders/demultiplexers
  Multiplexers
  Comparators
  Quine-McCluskey procedure

Section II - Logic Level: Sequential Logic
  Set-Reset (SR) latches
  Edge-triggered flip-flops
  An aside about electricity (Ohm's Law, resistor values, batteries, AC)
  D-latches and D flip-flops
  T flip-flops and JK flip-flops
  Excitation controls
  Registers
  Counters
  Sequential circuit design - finite state automata
  Counter design
  Moore and Mealy circuits
  Circuit analysis
  Additional counters
  Barrel shifter
  Glitches and hazards
  Constructing memory
  International unit prefixes (base 10)
  Circuit implementation using ROMs
  Hamming code

Section III - Computer Systems Level
  Representing numeric fractions
  IEEE 754 Floating Point Standard
  Register transfer logic
  Register transfer language (RTL)
  UNF RTL
  Signed multiply architecture and algorithm
  Booth's method
  Restoring and non-restoring division
  Implementing floating point using UNF RTL
  Computer organization
  Control unit
  Arithmetic and logic unit
  CPU registers
  Single bus CPU organization
  Microcode signals
  Microprograms
  Branching
  Microcode programming
  Other machine language instructions
  Index register
  Simplified Instructional Computer (SIC)
  Architectural enhancements
  CPU-memory synchronization
  Inverting microcode
  Vertical microcode
  Managing the CPU and peripheral devices
  The Z80

Logic Level: Combinational Logic

Combinational logic is characterized by functional specifications using only binary-valued inputs and binary-valued outputs:

    r input variables X --> [combinational logic] --> s output variables Z,   Z = f(X)   (Z is a function of X)

Remark: for given values of r and s, the number of possible functions is finite, since both the domain and the range of the functions are finite, of size 2^r and 2^s respectively (this is because the r input variables and the s output variables assume only the binary values 0 and 1). Although finite, it is worth noting that in practice the number of functions is usually quite large. For example, for r = 5 input variables and s = 1 output variable, the domain consists of the 2^5 = 32 possible input combinations of the two binary input values 0 and 1. To specify a function, each of these 32 possible input combinations must be assigned a value in the range, which consists of the two binary output values 0 and 1. This yields 2^32, over 4 billion, such functions of 5 variables! In general, with r input variables and s output variables, the domain consists of the k = 2^r combinations of the binary input values, and the range consists of the j = 2^s combinations of the binary output values. To specify a function, each of the k input combinations must be assigned one of the j possible values in the range. Since there are j^k possible ways to do this, there are j^k functions having r inputs and s outputs. Each such function corresponds to a logic circuit having r (binary-valued) inputs and s (binary-valued) outputs. When r = 2 input variables and s = 1 output variable, there are 2^4 = 16 possible functions (circuits), each having the basic appearance

    X, Y --> [f] --> Z = f(X,Y)

Recall that functions of 2 variables are called binary operations. For the usual algebra of numbers these include the familiar operations of addition, subtraction, multiplication, and division, and as many more as we might care to define.
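The counting argument above (j^k functions for k input combinations and j output values) can be checked by brute force for the r = 2, s = 1 case; a small illustrative Python sketch:

```python
# Enumerate every Boolean function of r = 2 inputs and s = 1 output.
# A function is fixed by choosing one output bit for each of the
# 2**r = 4 input rows, so there are 2**4 = 16 such functions
# (j**k with j = 2**s = 2 and k = 2**r = 4).
from itertools import product

r = 2
rows = list(product((0, 1), repeat=r))               # the k = 4 input combinations
functions = list(product((0, 1), repeat=len(rows)))  # one output column per function

print(len(rows))       # 4
print(len(functions))  # 16
```

Each tuple in `functions` is one output column of the table of binary operations discussed next.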

For circuit logic, the input variables are restricted to the values 0 and 1, so there are only 4 possible input combinations of X and Y, yielding exactly 16 possible binary operations. The corresponding logic circuits provide fundamental building blocks for more complex logic circuits. Such fundamental circuits are termed logic gates. Since there are only 16 of them, they can be listed out - see the table below. They are named for ease of reference and to reflect common terminology. It should be noted that some of the binary operations are "degenerate." In particular, Zero(X,Y) and One(X,Y) depend on neither X nor Y to determine their output; X(X,Y) and NOT X(X,Y) have output determined strictly by X; Y(X,Y) and NOT Y(X,Y) have output determined strictly by Y. The X and NOT X operations (or Y and NOT Y, for that matter) are usually thought of as unary operations (functions of 1 variable) rather than degenerate binary operations. As unary operations they are respectively termed the "identity" and the "complement".

TABLE OF BINARY OPERATIONS

    X Y :                0 0   0 1   1 0   1 1
    Zero                  0     0     0     0
    AND                   0     0     0     1
    Inhibit X (on Y=1)    0     0     1     0
    X                     0     0     1     1
    Inhibit Y (on X=1)    0     1     0     0
    Y                     0     1     0     1
    XOR                   0     1     1     0
    OR                    0     1     1     1
    NOR                   1     0     0     0
    COINC                 1     0     0     1
    NOT Y                 1     0     1     0
    X OR NOT Y            1     0     1     1
    NOT X                 1     1     0     0
    NOT X OR Y            1     1     0     1
    NAND                  1     1     1     0
    One                   1     1     1     1

The complement (or NOT) is designated by an overbar; e.g., X̄ is the complement of X. The other most commonly employed binary operations for combinational logic also have notational designations; e.g.,

    AND is designated by ·, e.g., X·Y
    OR is designated by +, e.g., X + Y
    NAND is designated by ↑, e.g., X ↑ Y
    NOR is designated by ↓, e.g., X ↓ Y
    XOR is designated by ⊕, e.g., X ⊕ Y
    COINCIDENCE is designated by u, e.g., X u Y.

Note that if we form the simple composite function f̄ (NOT f, or the complement of f), then f̄(X) = NOT(f(X)), and complementing twice returns f itself. Moreover,

    X ↑ Y = NOT(X·Y)    (NAND = NOT AND) - the Sheffer stroke
    X ↓ Y = NOT(X + Y)  (NOR = NOT OR) - the Peirce arrow
    X u Y = NOT(X ⊕ Y)  (COINC = complement of XOR)

In particular, NAND and AND, OR and NOR, and XOR and COINC are respectively complementary in the sense that each is the complement of the other. Rather than use a general graphical "logic gate" designation (a box with inputs X and Y and output Z = f(X,Y)), ANSI (American National Standards Institute) has standardized graphical symbols for the most commonly used logic gates:

    AND (·)    NAND (↑)    XOR (⊕)
    OR (+)     NOR (↓)     COINC (u)    NOT
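Since there are only four input combinations, the complementary relationships above (NAND = NOT AND, NOR = NOT OR, COINC = NOT XOR) can be verified exhaustively; a quick Python sketch:

```python
# Define the basic gates on the bits 0 and 1, with the "complementary"
# gates literally defined as complements, then check their truth tables.
from itertools import product

AND   = lambda x, y: x & y
OR    = lambda x, y: x | y
XOR   = lambda x, y: x ^ y
NAND  = lambda x, y: 1 - AND(x, y)   # NAND = NOT AND (Sheffer stroke)
NOR   = lambda x, y: 1 - OR(x, y)    # NOR  = NOT OR  (Peirce arrow)
COINC = lambda x, y: 1 - XOR(x, y)   # COINC = complement of XOR

for x, y in product((0, 1), repeat=2):
    assert NAND(x, y) == (0 if (x == 1 and y == 1) else 1)
    assert NOR(x, y) == (1 if (x == 0 and y == 0) else 0)
    assert COINC(x, y) == (1 if x == y else 0)   # 1 exactly when X equals Y
print("NAND, NOR, COINC behave as the complements of AND, OR, XOR")
```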

Composite functions such as f(g(x)) can easily be represented using these symbols; e.g., consider the composite

    f(A,B,C,D) = ((A·B̄)·C) u ((A·C)·D)

This is easily represented as a 3-level circuit: the first level forms A·B̄ and A·C, the second level combines these with C and D respectively, and the third level joins the two results with a COINC gate. The level of a circuit is the maximal number of gates an input signal has to travel through to establish the circuit output. Normally, both an input signal and its inverse are assumed to be available, so the NOT gate on B does not count as a 4th level for the circuit. Note that the behavior of the above circuit can be totally determined by evaluating its output for each of the 16 possible combinations of the inputs A, B, C, D (we'll return to determining its values later). The resulting table provides an exhaustive specification of the logic circuit, more compactly given by the algebraic expression for f. Its form corresponds to the "truth" tables used in symbolic logic. For small circuits, the truth table form of specifying a logic function is often used.

The inputs to a logic circuit typically represent data values encoded in a binary format as a sequence of 0's and 1's. The encoding scheme may be selected to facilitate manipulation of the data. For example, if the data is numeric, it is normally encoded to facilitate performing arithmetic operations. If the data is alphabetic

characters, it may be encoded to facilitate operations such as sorting. There are also encoding schemes designed specifically to facilitate effective use of the underlying hardware. A single input line is normally used to provide a single data bit of information to a logic circuit, representing the binary value 0 or 1. At the hardware level, 0 and 1 are typically represented by voltage levels; e.g., 0 by voltage L ("low") and 1 by voltage H ("high"). For the TTL (Transistor-Transistor Logic) technology, H = +5V and L = 0V (H is also referenced as Vcc, the collector supply voltage, and L as GND or "ground").

Representing Data

There are three fundamental types of data that must be considered:
    logical data (the discrete truth values - True and False)
    numeric data (the integers and real numbers)
    character data (the members of a defined finite alphabet)

Logical data representation: There is no imposed standard for representing logical data in computer hardware and software systems, but a single data bit is normally used to represent a logical data item in the context of logic circuits, with "True" represented by 1 and "False" by 0. This is the representation implicitly employed in the earlier discussion of combinational logic circuits, which are typically implementations of logic functions described via the mechanisms of symbolic logic. If the roles of 0 and 1 are reversed (0 representing True and 1 representing False), then the term negative logic is used to emphasize the change in representation for logical data.

Numeric data: The two types of numeric data, integers and real numbers, are represented very differently. The representation in each case must deal with the fact that a computing environment is inherently finite.

Integers: When integers are displayed for human consumption we use a "base" representation. This requires us to establish characters which represent the base digits.
Since we have ten fingers, the natural human base is ten, and the Arabic characters 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 are used to represent the base ten digits. Since logic circuits deal with binary inputs (0 or 1), the natural base in this context is two. Rather than invent new characters, the first two base ten characters (0 and 1)

are used to represent the base two digits. Any integer can be represented in any base, so long as we have a clear understanding of which base is being used and know what characters represent its digits. For example, 19 indicates a base ten representation of nineteen. In base two it is represented by 10011. When dealing with different bases, it is important to be able to convert from the representation in one base to that of the other. Note that it is easy to convert from base 2 to base 10, since each base 2 digit can be thought of as indicating the presence or absence of a power of 2:

    10011 (base 2) = 1·2^4 + 0·2^3 + 0·2^2 + 1·2^1 + 1·2^0 = 16 + 0 + 0 + 2 + 1 = 19 (base 10)

A conversion from base 10 to base 2 is more difficult but still straightforward. It can be handled "bottom-up" by repeated division by 2 until a quotient of 0 is reached, the remainders determining the powers of 2 that are present:

    19/2 = 9 R 1   (2^0 is present)
     9/2 = 4 R 1   (2^1 is present)
     4/2 = 2 R 0   (2^2 is not present)
     2/2 = 1 R 0   (2^3 is not present)
     1/2 = 0 R 1   (2^4 is present)

The conversion can also be handled "top-down" by iteratively subtracting out the highest power of 2 present until a difference of 0 is reached:

    19 - 16 = 3   (1)   (16 = 2^4 is present, so remove 16)
    no 8's        (0)   (8 = 2^3 is not present in what's left)
    no 4's        (0)   (4 = 2^2 is not present)
    3 - 2 = 1     (1)   (2 = 2^1 is present, so remove 2)
    1 - 1 = 0     (1)   (1 = 2^0 is present in what's left)

Bases which are powers of 2 are particularly useful for representing binary data since it is easy to convert to and from among them. The most commonly used are base 8 (octal), which uses as its digits 0, 1, 2, 3, 4, 5, 6, 7, and base 16 (hexadecimal), which uses as its digits 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F, where A, B, C, D, E, F are the digits for ten, eleven, twelve, thirteen, fourteen, and fifteen. An n-bit binary item can easily be viewed in the context of any of base 2, base 8, or base 16 simply by appropriately grouping the bits.

The 12-bit binary item 101100111010, grouped in threes as 101 100 111 010, is easily seen to be 5472 in base 8; grouped in fours as 1011 0011 1010, it is B3A in base 16 (using a calculator that handles base conversions, you can determine that the base ten value is 2874). Note that such calculators typically handle 8 hexadecimal digits, effectively giving a range of 32 bits when the hexadecimal digits are viewed as 4-bit chunks. Since it is easier to read a string of hexadecimal (hex) digits than a string of 0's and 1's, and the conversion to and from base 16 is so straightforward, digital information of many bits is frequently displayed using hex digits (or sometimes octal, particularly for older equipment).

Since digital circuits generally are viewed as processing binary data, a natural way to encode integers for use by such circuits is to use fixed blocks of n bits each; in particular, 32-bit integers are commonly used (i.e., n = 32). In general, an n-bit quantity may be viewed as naturally representing one of the 2^n integers in the range [0, 2^n - 1] in its base 2 form. For example, for n = 5, there are 2^5 = 32 such numbers. The 5-bit representations of these numbers in base 2 form are

    00000 (base 2) = 0
    00001 (base 2) = 1
    ...
    11111 (base 2) = 31

Note that as listed, the representation does not provide for negative numbers. One strategy to provide for negative numbers is to mimic the "sign-magnitude" approach normally used in everyday base ten representation of integers. For example, -273 explicitly exhibits as separate entries the sign and the magnitude of the number. A sign-magnitude representation strategy could use the first bit to represent the sign (0 for +, 1 for -). While perhaps satisfactory for everyday paper and pencil use, this strategy has awkward characteristics that weigh against it. First of all, the operation of subtraction is algorithmically vexing even for base ten paper and pencil exercises.
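Returning for a moment to the bottom-up (repeated division) conversion described earlier, it is easy to mechanize; a minimal Python sketch, with a hypothetical helper name:

```python
# Bottom-up base conversion by repeated division, mirroring the
# 19 -> 10011 worked example: divide by the base until the quotient is 0;
# the remainders are the digits, least significant first.
def to_base(n, base):
    """Return the digits of non-negative n in the given base, most significant first."""
    digits = []
    while n > 0:
        n, r = divmod(n, base)
        digits.append(r)                 # remainders emerge least significant first
    return list(reversed(digits)) or [0]

print(to_base(19, 2))   # [1, 0, 0, 1, 1]  i.e. 10011, as in the example
print(to_base(50, 8))   # [6, 2]           i.e. 62 in octal
```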
For example, the subtraction problem 23 - 34 is typically handled not by subtracting 34 from 23, but by first subtracting 23 from 34, exactly the opposite of what the problem is asking for! Even worse, 0 is represented twice (e.g., when n = 5, 0 is represented by both 00000 and 10000). Conceptually, the subtraction problem above can be viewed as the addition problem 23 + (-34). However, adding the corresponding sign-magnitude

representations as base 2 quantities will yield an incorrect result in many cases. Since numeric data is typically manipulated computationally, the representation strategy should facilitate, rather than complicate, the circuitry designed to handle the data manipulation. For these reasons, when n bits are used, the resulting 2^n binary combinations are viewed as representing the integers modulo 2^n, which inherently provides for negative integers and well-defined arithmetic (modulo 2^n).

The last statement needs some explanation. First observe that in considering the number line

    ... -2 -1 0 1 2 3 ...

truncation of the binary representation for any non-negative integer i to n bits results in i mod 2^n. Note that an infinite number of non-negative integers (precisely 2^n apart from each other) truncate to a given particular value in the range [0, 2^n - 1]; i.e., there are 2^n such groupings, corresponding to 0, 1, 2, ..., 2^n - 1. Negative integers can be included in each grouping simply by taking integers 2^n apart without regard to sign. These groupings are called the "residue classes modulo 2^n". Knowing any member of a residue class is equivalent to knowing all of them (just adjust up or down by multiples of 2^n to find the others, or for non-negative integers truncate the base 2 representation at n bits to find the value in the range [0, 2^n - 1]). In other words, the 2^n residue classes represented by 0, 1, 2, ..., 2^n - 1 provide a (finite) algebraic system that inherits its algebraic properties from the (infinite) integers, which justifies the viewpoint that this is a natural way to represent integer data in the context of a finite environment. Note that negative integers are implicitly provided for algebraically, since each algebraic entity (residue class) has an inverse under addition. For example, with n = 5, adding the mod 2^5 residue classes for 7 and 25 yields

    [25] + [7] = [32] = [0],   so [25] = [-7]

Returning to the computing practice point of view of identifying the residue classes with the 5-bit representations of 0, 1, 2, ..., 2^5 - 1 in base 2 form, the calculation becomes 11001 + 00111 = 00000 (truncated to 5 bits). The evident extension of this observation is that n-bit base 2 addition conforms exactly to addition modulo 2^n, a fact that lends itself to circuit implementation. Again referring to the number line

    ... -16 ... -2 -1 0 1 2 ... 15 16 ... 31 32 ...

consider for n = 5 the following table exhibiting in base ten the 32 residue classes modulo 2^5. Each residue class is matched to the 5-bit

representation corresponding to its base ten value in the range 0, 1, 2, ..., 31:

    5-bit representation    residue class
    00000    {..., -32, 0, 32, ...} = [0]
    00001    {..., -31, 1, 33, ...} = [1]
    00010    {..., -30, 2, 34, ...} = [2]
    ...
    01111    {..., -17, 15, 47, ...} = [15]
    10000    {..., -16, 16, 48, ...} = [16] = [-16]
    ...
    11110    {..., -2, 30, 62, ...} = [30] = [-2]
    11111    {..., -1, 31, 63, ...} = [31] = [-1]

Evidently, the 5-bit representations with a leading 0, viewed as base 2 integers, best represent the integers 0, 1, ..., 15. The 5-bit representations with a leading 1 best represent -16, -15, ..., -2, -1. This representation is called the 5-bit 2's complement representation. It provides for 0, 15 positive integers, and 16 negative integers.

Since data normally originates in sign-magnitude form, an easy means is needed to convert to/from the sign-magnitude form. An examination of the table leads to the conclusion that finding the magnitude for a negative value in 5-bit 2's complement form can be accomplished by subtracting it from 32 (100000 in base 2) and truncating the result. In general, this follows from the mod 2^5 residue class equivalences

    -[i] = [-i] = [-i] + [0] = [-i] + [32] = [-i + 32] = [32 - i]

which demonstrate that subtracting i from 32 and truncating the result will always yield the representation for -i; 32 - i is called the 2's complement of i. One way to subtract i from 32 is to subtract i from 31 (which is 11111 in base 2) and then add 1 (all in base 2). Subtracting from 11111 inverts each bit, so this is equivalent to inverting each bit and then adding 1 (in base 2) to the overall result. There is nothing special in this discussion that requires 5 bits; i.e., the same rationale is equally applicable to an n-bit environment. Hence, in general, to find the 2's complement of an integer represented in n-bit 2's complement form, invert its bits and add 1 (in base 2).

Example 1: Determine the 8-bit 2's complement representation of -37. First, the magnitude of -37 is given by 37 = 100101 in base 2, which is 00100101 in 8-bit 2's complement form.
The representation for -37 is then given by the 2's complement of 37, obtained by inverting the bits of the 8-bit representation of the magnitude and adding 1; i.e.,

    00100101   (37)
    11011010   (bits inverted)
  +        1
    11011011   = -37 in 8-bit 2's complement form

Example 2: Determine the (base ten) values of the 9-bit 2's complement integers

    i = 000011011
    j = 111011010
    s = i + j

For i, since the lead bit is 0, the sign is + and the magnitude of the number is directly given by its representation as a base 2 integer; i.e., i = 27. For j, since the lead bit is 1, the number is negative, so its magnitude is given by -j. Inverting j's bits and adding 1 gives 000100101 + 1 = 000100110 = 38 = -j (j's magnitude); i.e., j = -38. i + j (which we now know is -11) can be computed directly using ordinary base 2 addition modulo 2^9; i.e.,

    i:     000011011  =  27
    j:   + 111011010  = -38
    i+j:   111110101  = -11

Example 2 illustrates that only circuitry for base 2 addition needs to be developed to perform addition and subtraction on integers represented in n-bit 2's complement form.

Historically, a variation closely related to n-bit 2's complement, namely n-bit 1's complement, has also been used for integer representation in computing devices. The 1's complement of an n-bit block of 0's and 1's is obtained by inverting each bit. For this representation, arithmetic still requires only addition, but whenever there is a carry out of the sign position (and no overflow has occurred), 1 must be added to the result (a so-called "end-around carry", something easily achieved at the hardware level). For example, in 8-bit 1's complement:

     38:    00100110
    -27:  + 11100100   (1's complement of 00011011)
          1 00001010
         +         1   (end-around carry of the carry-out)
            00001011   = 11

Note that the end-around carry is only used when working in 1's complement.

Integers do not have to be represented in n-bit blocks. Another representation format is Binary Coded Decimal (BCD), where each

decimal digit of the base ten representation of the number is separately represented using its 4-bit binary (base 2) form. The 4-bit forms are

    0 = 0000
    1 = 0001
    2 = 0010
    ...
    9 = 1001

so in BCD, 27 is represented in 8 bits by 0010 0111 and 803 is represented in 12 bits by 1000 0000 0011. BCD is obviously a base ten representation strategy. It has the advantage of being close to a character representation form (discussed below). When used in actual implementation, it is employed in sign-magnitude form (the best known of which is IBM's packed decimal form, which maintains the sign in conjunction with the last digit to accommodate the fact that the number of bits varies from number to number). Since there is no clear choice as to how to represent the sign, we will not address the sign-magnitude form further in the context of discussing BCD.

It is possible to build BCD arithmetic circuitry, but it is more complex than that used for 2's complement. The arithmetic difficulties associated with BCD can easily be seen by considering what happens when two decimal digits are added whose sum exceeds 9. For example, adding 9 and 4 using ordinary base 2 arithmetic yields

    1001  =  9
  + 0100  =  4
    1101  = 13

which differs from 0001 0011, which is 13 in BCD. Achieving the correct BCD result from the base 2 result requires adding a correction of +6 (0110 in base 2); e.g.,

    1101
  + 0110
  1 0011  = 13 in BCD

In general, a correction of 6 is required whenever the sum of the two digits exceeds 9. Hence, the circuitry has to allow for the fact that

sometimes a correction factor is required and sometimes not. Since a BCD representation is normally handled using sign-magnitude, subtraction is an added problem to cope with.

Real numbers: Real numbers are normally represented in a format deriving from the idea of the decimal expansion, which is used in paper and pencil calculations to provide rational approximations to real numbers (this is termed a "floating point" representation, since the base point separating the integer part from the fractional part may shift as operations are performed on the number). There is a defined standard for representing real numbers, the IEEE 754 Floating Point Standard, whose discussion will be deferred until later due to its complexity. An alternate representation for real numbers is to fix the number of allowed places after the base point (a so-called "fixed point" representation) and use integer arithmetic. Since the number of places is fixed, the base point does not need to be explicitly represented (i.e., it is an "implied base point"). The result of applying arithmetic operations such as multiplication and division typically requires the use of additional (hidden) positions after the base point to accurately represent the result, since a fixed point format truncates any additional positions resulting from multiplication or division. For this reason precision is quickly lost, further limiting the practicality of using this format.

Character representation: Character data is defined by a finite set, its alphabet, which provides the character domain. The characters of the alphabet are represented as binary combinations of 0's and 1's. If 7 (ordered) bits are used, then the 7 bits provide 2^7 = 128 different combinations of 0's and 1's. Thus 7 bits provide encodings for an alphabet of up to 128 characters. If 8 bits are employed, then the alphabet may have as many as 256 characters.
There are two defined standards in use in this country for representing character data:

    ASCII (American Standard Code for Information Interchange)
    EBCDIC (Extended Binary Coded Decimal Interchange Code)

ASCII has a 7-bit base definition, and an 8-bit extended version providing additional graphics characters (table on a later page). In each case the standard prescribes an alphabet and its representation. Both standards have representation formats that make conversion from character form to BCD easy (for each character representing a decimal digit, the last 4 bits are its BCD representation). The representation is chosen so that when viewed in numeric ascending order, the corresponding characters follow the desired ordering for the defining alphabet, which means a numeric sort procedure can also be used for character sorting needs. Since character strings typically encompass many bits, character data is usually represented using hex digits rather than binary.
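The hex form of an ASCII string can be read off directly; a quick Python check (standard ASCII code points, verifiable against any ASCII table):

```python
# Hex ASCII codes of a short string; ord() yields the standard ASCII
# code point for each character, shown as two hex digits.
text = "CDA 3101"
codes = [format(ord(c), '02X') for c in text]
print(' '.join(codes))   # 43 44 41 20 33 31 30 31
```

Note that the last four bits of each decimal-digit character ('3' = 33, '1' = 31, '0' = 30) are exactly its BCD form, as claimed above.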

For example, the text string "CDA 3101" is represented by

    C3 C4 C1 40 F3 F1 F0 F1    in EBCDIC
     C  D  A spc  3  1  0  1

    43 44 41 20 33 31 30 31    in ASCII (or ASCII-8)
     C  D  A spc  3  1  0  1

Since characters are the most easily understood measure for data capacity, an 8-bit quantity is termed a byte of storage, and data storage capacities are given in bytes rather than bits or some other measure. 2^10 = 1024 bytes is called a K-byte, 2^20 = 1,048,576 bytes is called a megabyte, 2^30 bytes is called a gigabyte, 2^40 bytes is called a terabyte, and so forth.

Other representation schemes: BCD is an example of a weighted representation scheme that utilizes the natural weighting of the binary representation of a number; i.e.,

    w3·d3 + w2·d2 + w1·d1 + w0·d0

where the digits di are just 0 or 1 and the weights are w3 = 8, w2 = 4, w1 = 2, w0 = 1. Since only 10 of the possible 16 combinations are used, d3 is 0 for all but 2 cases (8 and 9). A variation uses w3 = 2 to form what is known as "2421 BCD", in which d3 = 0 for 0, 1, 2, 3, 4 and d3 = 1 for 5, 6, 7, 8, 9. A major advantage over regular BCD is that the code is "self-complementing" in the sense that flipping the bits produces the 9's complement.

Example: subtraction by using addition. A subtraction such as 654 - 470 is awkward because of the need to borrow. The computation can be done by using addition if you think in terms of

    654 + (999 - 470) - 999 = 654 + 529 - 999 = 1183 - 1000 + 1 = 183 + 1 = 184

999 - 470 = 529 is called the "9's complement" of 470, so the algorithm to do a subtraction A - B is:

    1. form the 9's complement (529) of the subtrahend B (470)
    2. add it to the minuend A (654)
    3. discard the carry and add 1 (corresponding to the end-around carry of 1's complement)

Note that no subtraction circuitry is needed, but the technique does need an easy way to get the 9's complement. With 2421 BCD, 470 = 0100 1101 0000, and flipping the bits gives its 9's complement, 529 = 1011 0010 1111. Addition is still complicated, as can be seen by adding 6 + 5, which in 2421 BCD is 1100 + 1011 = 1 0111 with a carry, not a valid encoding of 11 (i.e., ordinary binary addition fails). A final BCD code, "excess-3 BCD", is also self-complementing.
It is simply ordinary BCD + 3, so for the above example, with excess-3, 470 = 0111 1010 0011 and the 9's complement of 470 is 529 = 1000 0101 1100.
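The three-step subtraction algorithm above can be sketched in Python working on ordinary decimal integers (the function name and the fixed digit count are ours, for illustration):

```python
def nines_complement_subtract(a, b, digits=3):
    """Compute a - b (for a >= b) by the notes' algorithm:
    add the 9's complement of b, discard the carry, and add 1."""
    comp = (10**digits - 1) - b           # 9's complement, e.g. 999 - 470 = 529
    total = a + comp                      # 654 + 529 = 1183
    carry, rest = divmod(total, 10**digits)
    return rest + carry                   # end-around carry: 183 + 1 = 184

assert nines_complement_subtract(654, 470) == 184
```

In hardware the point is that step 1 is a digit-wise bit flip for a self-complementing code, so no decimal subtractor circuit is needed.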

The lesson to learn is that codes must be formulated to represent data in a computer, and different representations are employed for different purposes; e.g.,
- 2's complement is a number representation that facilitates arithmetic in base 2
- BCD is another number representation that facilitates translation of numbers to decimal character form but complicates arithmetic
- ASCII represents characters in a manner that facilitates upper-case/lower-case adjustment and ease of conversion of decimal characters
- Other schemes such as "2421 BCD" and "excess-3 BCD" seek to improve decimal arithmetic by facilitating use of the 9's complement to avoid subtraction
Sometimes representation schemes are designed to facilitate other tasks, such as representing graphical data elements or tracking. For example, Gray code is commonly used for identifying sectors on a rotating disk. Gray code is defined recursively by using the rule: to form the (n+1)-bit representation from the n-bit representation,
- preface the n-bit representation by 0
- append to this the n-bit representation in reverse order prefaced by 1
Hence, the 1, 2, and 3-bit representations are

  0, 1
  00, 01, 11, 10
  000, 001, 011, 010, 110, 111, 101, 100

Consider three concentric disks shaded as follows:

The shading provides a Gray code identification for 8 distinct wedge-shaped sections on the disk. As the disk rotates from one section to the next, no more than one digit position (represented by shaded and unshaded segments) changes, simplifying the task of determining the id of the next section. Note that this is a characteristic of the Gray code. In contrast, in regular binary the transition from 3 to 4, 011 to 100, changes all 3 digits, which means hardware tracking the change under this representation could face arbitrary intermediate patterns in the transition from section 3 to section 4, complicating the process of determining that 4 is the id of the next section (e.g., something such as a delay would have to be added to the control circuitry to allow the transition to stabilize). For a disk such as the above, a row of 3 reflectance sensors, one for each concentric band, can be used to track the transitions.

Boolean algebra: Boolean algebra is the algebra of circuits, the algebra of sets, and the algebra of truth table logic. A Boolean algebra has two fundamental elements, a "zero" and a "one," whose properties are described below. For circuits, "zero" is designated by 0 or L (for low voltage) and "one" by 1 or H (for high voltage). For sets, "zero" is the empty set and "one" is the set universe. For truth table logic, "zero" is designated by F (for false) and "one" by T (for true). Just as the algebraic properties of numbers are described in terms of fundamental operations (addition and multiplication), the algebraic properties of a Boolean algebra are described in terms of basic Boolean operations. For circuits, the basic Boolean operations are ones we've already discussed: AND (·), OR (+), and complement (overbar). For sets the corresponding operations are intersection (∩), union (∪), and set complement. For truth table logic they are AND (∧), OR (∨), and NOT (~).
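The recursive Gray code rule, and the one-bit-per-transition property it guarantees (including the wraparound from the last sector back to the first), can be sketched as:

```python
def gray_code(n):
    """n-bit Gray code via the recursive rule from the notes:
    prefix the (n-1)-bit list with 0, then its reversal with 1."""
    if n == 0:
        return ['']
    prev = gray_code(n - 1)
    return ['0' + c for c in prev] + ['1' + c for c in reversed(prev)]

codes = gray_code(3)
# codes == ['000', '001', '011', '010', '110', '111', '101', '100']

# Adjacent sectors (including the wraparound) differ in exactly one bit:
for i in range(len(codes)):
    a, b = codes[i], codes[(i + 1) % len(codes)]
    assert sum(x != y for x, y in zip(a, b)) == 1
```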
Recall that AND and OR are binary operations (an operation requiring two arguments), while complement is a unary operation (an operation requiring one argument).

For circuits, also recall that
- the multiplication symbol · is used for AND
- the addition symbol + is used for OR
- the symbol for complement is an overbar; i.e., X̄ designates the complement of X (for a compound expression we write a trailing overline, as in (X·Y)‾).
The utilization of · for AND and + for OR is due to the fact that these Boolean operations have algebraic properties similar to (but definitely not the same as) those of multiplication and addition for ordinary numbers. Basic properties for Boolean algebras (using the circuit operation symbols, rather than those for sets or for symbolic logic) are as follows:
1. Commutative property: + and · are commutative operations; e.g., X + Y = Y + X and X·Y = Y·X. In contrast to operations such as subtraction and division, a commutative operation has a left-right symmetry, permitting us to ignore the order of the operation's operands.
2. Associative property: + and · are associative operations; e.g., X + (Y + Z) = (X + Y) + Z and X·(Y·Z) = (X·Y)·Z. Non-associative operations (such as subtraction and division) tend to cause difficulty precisely because they are non-associative. The property of associativity permits selective omission of parentheses, since the order in which the operation is applied has no effect on the outcome; i.e., we can just as easily write X + Y + Z as X + (Y + Z) or (X + Y) + Z since the result is the same whether we first evaluate X + Y or Y + Z.
3. Distributive property: · distributes over + and + distributes over ·; e.g., X·(Y + Z) = (X·Y) + (X·Z) and also X + (Y·Z) = (X + Y)·(X + Z). With the distributive property we see a strong departure from the algebra of ordinary numbers, which definitely does not have the property of + distributing over ·. The distributive property illustrates a strong element of symmetry that occurs in Boolean algebras, a characteristic known as duality.
4. Zero and one: there is an element zero (0) and an element one (1) such that for every X, X + 1 = 1 and X·0 = 0

5. Identity: 0 is an identity for + and 1 is an identity for ·; e.g., X + 0 = X and X·1 = X for every X
6. Complement property: every element X has a complement X̄ such that X + X̄ = 1 and X·X̄ = 0. The complement of 0 is 1 and vice-versa; it can be shown that in general complements are unique; i.e., each element has exactly one complement.
7. Involution property (rule of double complements): for each X, (X̄)‾ = X
8. Idempotent property: for every element X, X + X = X and X·X = X
9. Absorption property: for every X and Y, X + (X·Y) = X and X + (X̄·Y) = X + Y. Anything "AND"ed with X is absorbed into X under "OR" with X. Anything "AND"ed with X̄ is absorbed in its entirety under "OR" with X.
10. DeMorgan property: for every X and Y, (X·Y)‾ = X̄ + Ȳ and (X + Y)‾ = X̄·Ȳ. The DeMorgan property describes the relationship between "AND" and "OR", which, with the rule of double complements, allows expressions to be converted from use of "AND"s to use of "OR"s and vice-versa; e.g.,

  X + Y = ((X + Y)‾)‾ = (X̄·Ȳ)‾
  X·Y = ((X·Y)‾)‾ = (X̄ + Ȳ)‾

Some of these properties can be proven from others (i.e., they do not constitute a minimal defining set of properties for Boolean algebras); for example, the idempotent rule X + X = X can be obtained by the manipulation X + X = X + (X·1) = X by the absorption property. The DeMorgan property provides rules for using NANDs and NORs (where NAND stands for "NOT AND" and NOR stands for "NOT OR"). The operation NAND (sometimes called the Sheffer stroke) is denoted by
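Since a 2-element Boolean algebra has only finitely many cases, the properties above can be checked exhaustively. A quick Python sketch over {0, 1}, using & for AND, | for OR, and 1 - x for the complement:

```python
# Brute-force check of selected Boolean-algebra properties over {0, 1}.
from itertools import product

def NOT(x):
    return 1 - x

for X, Y, Z in product((0, 1), repeat=3):
    assert X | (Y & Z) == (X | Y) & (X | Z)    # + distributes over .
    assert X | (X & Y) == X                    # absorption
    assert X | (NOT(X) & Y) == X | Y           # second absorption form
    assert NOT(X & Y) == NOT(X) | NOT(Y)       # DeMorgan
    assert NOT(X | Y) == NOT(X) & NOT(Y)       # DeMorgan (dual)
```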

X ↑ Y = (X·Y)‾ and the operation NOR (sometimes called the Peirce arrow) is denoted by X ↓ Y = (X + Y)‾. Utilizing the rule of double complements and the DeMorgan property, any expression can be written in terms of the complement operation and ·, or the complement operation and +. Moreover, since the complement can be written in terms of either ↑ or ↓; i.e.,

  X̄ = X ↑ X = X ↓ X

any Boolean expression can be written solely in terms of ↑ or solely in terms of ↓. This observation is particularly significant for a circuit whose function is represented by a Boolean expression, since this property of Boolean algebra implies that the circuit construction can be accomplished using as basic circuit elements only NAND circuits or only NOR circuits.

Note that properties such as commutative and associative are also characteristic of the algebra of numbers, but others, such as the idempotent and DeMorgan properties, are not; i.e., Boolean algebra, the algebra of circuits, has behaviors quite different from what we are used to with numbers. Just as successfully working with numbers requires gaining understanding of their algebraic properties, working with circuits requires gaining understanding of Boolean algebra. Just as we often omit writing the times symbol in formulas with numbers, we may omit the AND symbol in Boolean formulas.

Examples:
1. There is no cancellation; i.e., XY = XZ does not imply that Y = Z (if it did, the idempotent property X·X = X = X·1 would imply that X = 1!)
2. Complements are unique. To see this, assume that Y is also a complement for X; i.e., X + Y = 1 and XY = 0. AND the 1st equation through with X̄ to get X̄·X + X̄·Y = X̄. Since X̄·X = 0, this reduces to X̄·Y = X̄. Similarly, since X + X̄ = 1 and XY = 0, XY + X̄Y = Y reduces to X̄Y = Y. Putting the last two lines together we have X̄ = Y.
3.
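The claim that NAND alone suffices can be sketched directly: build NOT, AND, and OR from NAND using the conversions above, then check all inputs (the Python function names are ours, for illustration):

```python
# Building NOT, AND, OR from NAND alone, where X NAND Y = not (X and Y).
def NAND(x, y):
    return 1 - (x & y)

def NOT(x):                                      # X' = X NAND X
    return NAND(x, x)

def AND(x, y):                                   # XY = (X NAND Y) NAND (X NAND Y)
    return NAND(NAND(x, y), NAND(x, y))

def OR(x, y):                                    # X+Y = (X NAND X) NAND (Y NAND Y)
    return NAND(NAND(x, x), NAND(y, y))

for x in (0, 1):
    assert NOT(x) == 1 - x
    for y in (0, 1):
        assert AND(x, y) == (x & y)
        assert OR(x, y) == (x | y)
```

The same exercise works with NOR in place of NAND, with the roles of the AND and OR conversions swapped.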
The list of properties is not minimal; e.g., given that the properties other than the idempotent property are true, it can be shown that the idempotent property is also true as follows: X + X̄ = 1, so using the distributive property, X·X + X·X̄ = X·(X + X̄) = X·1 = X, which in turn leads to

X·X = X since X·X̄ = 0. A similar argument can be used to show that X + X = X.

Given that the properties other than the absorption property are true, it can be shown that the absorption property is also true as follows: Since 1 + Y = 1, X + XY = X(1 + Y) = X, the 1st absorption criterion. Starting from X + X̄ = 1 we get XY + X̄Y = Y. Adding X to both sides we get X + XY + X̄Y = X + Y. By the first absorption criterion this reduces to X + X̄Y = X + Y, which is the 2nd absorption criterion.

The DeMorgan property has great impact on circuit equations, since it provides the formula for converting from OR to NAND and from AND to NOR. The above proofs are by logical deduction. For a 2-element Boolean algebra, proof can be done exhaustively by examining all cases; e.g., we can verify DeMorgan by means of a "truth table":

  X Y | X̄ Ȳ | X̄·Ȳ | X+Y | (X+Y)‾
  0 0 | 1 1 |  1  |  0  |  1
  0 1 | 1 0 |  0  |  1  |  0
  1 0 | 0 1 |  0  |  1  |  0
  1 1 | 0 0 |  0  |  1  |  0

This is called a "brute force" method for verifying the equation (X+Y)‾ = X̄·Ȳ because it exhaustively checks every case using the definition of the AND, OR and NOT operations.

Since AND and OR are associative, we can write X·Y·Z and X + Y + Z unparenthesized. It can be shown that

  (X·Y·Z)‾ = X̄ + Ȳ + Z̄   and   (X + Y + Z)‾ = X̄·Ȳ·Z̄

This leads to the "generalized DeMorgan property":

  (X1·X2·...·Xn)‾ = X̄1 + X̄2 + ... + X̄n
  (X1 + X2 + ... + Xn)‾ = X̄1·X̄2·...·X̄n

which is often useful for circuits of more than 2 variables. There are multi-input NAND gates to take advantage of this property. WARNING: NAND and NOR are not associative.
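The same brute-force method carries over to the generalized DeMorgan property; here is the n = 3 case checked exhaustively in Python:

```python
# Brute-force check of the generalized DeMorgan property for n = 3.
from itertools import product

for x, y, z in product((0, 1), repeat=3):
    # (x.y.z)' == x' + y' + z'
    assert 1 - (x & y & z) == (1 - x) | (1 - y) | (1 - z)
    # (x+y+z)' == x'.y'.z'
    assert 1 - (x | y | z) == (1 - x) & (1 - y) & (1 - z)
```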

Consider the truth table:

  X Y Z | X↑(Y↑Z) | (X↑Y)↑Z | (X·Y·Z)‾
  0 0 0 |    1    |    1    |    1
  0 0 1 |    1    |    0    |    1
  0 1 0 |    1    |    1    |    1
  0 1 1 |    1    |    0    |    1
  1 0 0 |    0    |    1    |    1
  1 0 1 |    0    |    0    |    1
  1 1 0 |    0    |    1    |    1
  1 1 1 |    1    |    1    |    0

It is evident that X↑(Y↑Z) ≠ (X↑Y)↑Z ≠ (X·Y·Z)‾. Similarly, X↓(Y↓Z) ≠ (X↓Y)↓Z ≠ (X+Y+Z)‾. This means that care must be taken in grouping the NAND (↑) and NOR (↓) operators in algebraic expressions! The other two common binary operations, XOR (⊕) and COINC (⊙), are both associative:

  X Y Z | X⊕Y | (X⊕Y)⊕Z | X⊕(Y⊕Z) | X⊙Y | (X⊙Y)⊙Z | X⊙(Y⊙Z)
  0 0 0 |  0  |    0    |    0    |  1  |    0    |    0
  0 0 1 |  0  |    1    |    1    |  1  |    1    |    1
  0 1 0 |  1  |    1    |    1    |  0  |    1    |    1
  0 1 1 |  1  |    0    |    0    |  0  |    0    |    0
  1 0 0 |  1  |    1    |    1    |  0  |    1    |    1
  1 0 1 |  1  |    0    |    0    |  0  |    0    |    0
  1 1 0 |  0  |    0    |    0    |  1  |    0    |    0
  1 1 1 |  0  |    1    |    1    |  1  |    1    |    1

Generalized (multi-input) operations serve to reduce the number of levels in a circuit; e.g., a 3-input AND is a 1-level circuit for XYZ equivalent to the 2-level circuit (XY)Z.
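The associativity contrast can also be checked exhaustively rather than by table; in this Python sketch, XOR passes and NAND fails:

```python
# Check: XOR is associative, NAND is not (exhaustive over all inputs).
from itertools import product

def NAND(a, b):
    return 1 - (a & b)

xor_assoc = all((x ^ y) ^ z == x ^ (y ^ z)
                for x, y, z in product((0, 1), repeat=3))
nand_assoc = all(NAND(NAND(x, y), z) == NAND(x, NAND(y, z))
                 for x, y, z in product((0, 1), repeat=3))
assert xor_assoc and not nand_assoc
```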

Canonical forms: Any combinational circuit, regardless of the gates used, can be expressed in terms of combinations of AND, OR, and NOT. The most general form of this expression is called a canonical form. There are two types:
- the canonical sum of products
- the canonical product of sums
Formulating these turns out to be quite easy if the truth table for the circuit is constructed. For example, consider a circuit f(X,Y,Z) with specification:

  X Y Z | f(X,Y,Z)
  0 0 0 |    0
  0 0 1 |    1     X̄·Ȳ·Z
  0 1 0 |    0
  0 1 1 |    0
  1 0 0 |    0
  1 0 1 |    1     X·Ȳ·Z
  1 1 0 |    1     X·Y·Z̄
  1 1 1 |    0

Note that f(X,Y,Z) = X̄·Ȳ·Z + X·Ȳ·Z + X·Y·Z̄. Each of these terms is obtained just by looking at the combinations for which f(X,Y,Z) is 1. Each of these is called a minterm. There are 8 possible minterms for 3 variables (see below). Analogously, from the combinations for which f(X,Y,Z) is 0 we get

  f(X,Y,Z) = (X+Y+Z)(X+Ȳ+Z)(X+Ȳ+Z̄)(X̄+Y+Z)(X̄+Ȳ+Z̄)

Each of these terms is obtained just by looking at the combinations for which f(X,Y,Z) is 0. Each of these is called a maxterm. There are 8 possible maxterms for 3 variables (see below). The minterms and maxterms are numbered from 0 to 7 corresponding to the binary combination they represent.

  X Y Z | minterm | maxterm
  0 0 0 | 0. X̄·Ȳ·Z̄ | X+Y+Z
  0 0 1 | 1. X̄·Ȳ·Z | X+Y+Z̄
  0 1 0 | 2. X̄·Y·Z̄ | X+Ȳ+Z
  0 1 1 | 3. X̄·Y·Z | X+Ȳ+Z̄
  1 0 0 | 4. X·Ȳ·Z̄ | X̄+Y+Z
  1 0 1 | 5. X·Ȳ·Z | X̄+Y+Z̄
  1 1 0 | 6. X·Y·Z̄ | X̄+Ȳ+Z
  1 1 1 | 7. X·Y·Z | X̄+Ȳ+Z̄

Note that the maxterms are just the complements of their corresponding minterms. Representing a function by using its minterms is called the canonical sum of products, and by using its maxterms the canonical product of sums; i.e.,

  f(X,Y,Z) = X̄·Ȳ·Z + X·Ȳ·Z + X·Y·Z̄

is the canonical sum of products and

  f(X,Y,Z) = (X+Y+Z)(X+Ȳ+Z)(X+Ȳ+Z̄)(X̄+Y+Z)(X̄+Ȳ+Z̄)

is the canonical product of sums for the function f(X,Y,Z). The short-hand notation (Σ-notation) f(X,Y,Z) = Σ(1,5,6) is used for the canonical sum of products. Similarly the short-hand notation (Π-notation) f(X,Y,Z) = Π(0,2,3,4,7) is used for the canonical product of sums. Canonical representations are considered to be 2-level representations, since for most circuits a signal and its opposite are both available as inputs. A combinational circuit's behavior is specified by one of
- a truth table listing the outputs for every possible combination of input values
- a canonical representation of the outputs using Σ or Π notation
- a circuit diagram using logic gates

Converting to NANDs or NORs: For a Boolean algebra, notice that the complement X̄ is given by X ↑ X. Since X·Y is given by the complement of X ↑ Y, we have

  X·Y = (X ↑ Y) ↑ (X ↑ Y)

By DeMorgan,

  X + Y = ((X + Y)‾)‾ = (X̄·Ȳ)‾ = X̄ ↑ Ȳ = (X ↑ X) ↑ (Y ↑ Y)

Hence, we can describe an equation using AND, OR, and complement solely in terms of NANDs using the above conversions. Similarly, for NOR we have the conversions

  X̄ = X ↓ X
  X + Y = (X ↓ Y) ↓ (X ↓ Y)
  X·Y = ((X·Y)‾)‾ = (X̄ + Ȳ)‾ = X̄ ↓ Ȳ = (X ↓ X) ↓ (Y ↓ Y)   (by DeMorgan)

By DeMorgan, a NAND gate is equivalent to an OR of complements ((X·Y)‾ = X̄ + Ȳ) and a NOR gate is equivalent to an AND of complements ((X+Y)‾ = X̄·Ȳ).
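Reading the Σ and Π lists off a truth table is mechanical, which the following Python sketch makes explicit (the helper name canonical_lists is ours):

```python
# Deriving the Sigma (minterm) and Pi (maxterm) lists from a truth function.
from itertools import product

def canonical_lists(f, nvars):
    """Return (minterm_numbers, maxterm_numbers) for f over nvars inputs."""
    minterms, maxterms = [], []
    for i, bits in enumerate(product((0, 1), repeat=nvars)):
        (minterms if f(*bits) else maxterms).append(i)
    return minterms, maxterms

# The notes' example: f = x'y'z + xy'z + xyz'
f = lambda x, y, z: ((1-x) & (1-y) & z) | (x & (1-y) & z) | (x & y & (1-z))
assert canonical_lists(f, 3) == ([1, 5, 6], [0, 2, 3, 4, 7])
```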

Using these equivalences, an OR-AND (product of sums) combination can be converted to NOR-NOR:

  OR-AND ≡ NOR-NOR

Other equivalences to OR-AND that follow from this one are NAND-AND and AND-NOR:

  OR-AND ≡ NAND-AND ≡ AND-NOR

For the sum of products (AND-OR) we have the counterpart equivalences:

  AND-OR ≡ NAND-NAND ≡ NOR-OR ≡ OR-NAND

At this point, if given a truth table, or a representation using Σ or Π notation, we can generate a 2-level circuit diagram as the canonical sum of products or product of sums. Similarly, given a circuit diagram, we can produce its truth table. This process is called circuit analysis. For example, recall that the circuit equation

  f(A,B,C,D) = ((A·B) ↑ C) ⊙ ((A ↓ C) ↑ D)

was earlier represented as a 3-level circuit, with first-level gates for A·B and A ↓ C, second-level gates for (A·B) ↑ C and (A ↓ C) ↑ D, and a final COINC gate. From the circuit equation we can obtain the truth table by tabulating the intermediate signals A·B, (A·B) ↑ C, A ↓ C, and (A ↓ C) ↑ D for each of the 16 input combinations, conforming to the values given earlier. From the truth table

  f(A,B,C,D) = Σ(0,5,10,15) = Π(1,2,3,4,6,7,8,9,11,12,13,14)

Note that the canonical representations are not as compact as the original circuit equation.

Circuit simplification: A circuit represented in a canonical form (usually by Σ or Π notation) can usually be simplified. There are 3 techniques commonly employed:
- algebraic reduction
- Karnaugh maps (K-maps)
- Quine-McCluskey method

Algebraic reduction is limited by the extent to which one is able to observe potential combinations in examining the equation; e.g.,

  ĀBC̄D + ĀBCD + ABCD
    = ĀBC̄D + ĀBCD + ĀBCD + ABCD   (idempotent)
    = ĀBD(C̄ + C) + (Ā + A)BCD     (distributive)
    = ĀBD·1 + 1·BCD               (complement)
    = ĀBD + BCD                   (identity)

This is a minimal 2-level representation for the circuit. The further algebraic reduction to (Ā + C)BD produces a 2-level circuit dependent only on 2-input gates.

The Quine-McCluskey method is an extraction from the K-map approach abstracted for computer implementation. It is not dependent on visual graphs and is effective no matter the number of inputs. Since it does not lend itself to hand implementation for more than a few variables, it will only be discussed later and in sketchy detail.

For circuits with no more than 4 or 5 input variables, K-maps provide a visual technique for effectively reducing a combinational circuit to a minimal form. The idea for K-maps is to arrange minterms whose value is 1 (or maxterms whose value is 0) on a grid so as to locate patterns which will combine. For a 1-variable map, input variable X, the minterm locations are as follows:

  X:  0   1
      X̄   X

While a 1-variable map is not useful, it is worth including to round out the discussion of maps using more variables. For a 2-variable map, input variables X and Y, the minterm locations are

  X \ Y:   0    1
     0    X̄Ȳ   X̄Y
     1    XȲ   XY

In general we only label the cells according to the binary number they correspond to in the truth table (the number used by the Σ or Π notations). The map structure is then:

  X \ Y:  0  1
     0    0  1
     1    2  3
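Each step of the algebraic reduction above preserves the function, which can be confirmed exhaustively; this Python sketch compares the starting expression, the minimal 2-level form, and the 2-input-gate form:

```python
# Exhaustive check of the reduction:
# A'BC'D + A'BCD + ABCD  ==  A'BD + BCD  ==  (A' + C).B.D
from itertools import product

for A, B, C, D in product((0, 1), repeat=4):
    a_, c_ = 1 - A, 1 - C                       # complements of A and C
    original = (a_ & B & c_ & D) | (a_ & B & C & D) | (A & B & C & D)
    reduced = (a_ & B & D) | (B & C & D)
    minimal = (a_ | C) & B & D
    assert original == reduced == minimal
```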

For example, if we have f(X,Y) = Σ(1,3), we mark the minterms for 1 and 3 in the 2-variable map as follows:

  X \ Y:  0  1
     0       1
     1       1

Now we can graphically see that a reduction is possible by delineating the adjacent pair of minterms (corresponding to X̄Y + XY), which in fact reduces to Y. Notice that there are visual clues: the 1 over the column corresponds to Y, and looking down vertically, the 0 and 1 for X "cancel". 2-variable K-maps also are not particularly useful, but again are illustrative. With 3 variables, the pattern is

  X \ YZ:  00  01  11  10
     0      0   1   3   2
     1      4   5   7   6

The key thing to note is that the order across the top follows the Gray code pattern so that there is exactly one 0-1 matchup between each column, including a match between the 1st and 4th columns. For the function f(X,Y,Z) = Σ(1,3,4,6), the K-map is

  X \ YZ:  00  01  11  10
     0          1   1
     1      1           1

  f(X,Y,Z) = X̄Z + XZ̄

The 1st term of the reduced form for f(X,Y,Z) is in the X̄ row (flagged by 0) and the 2nd is in the X row (flagged by 1). In each case the Y term cancels since it is the one with 0 matched to 1. Pay particular attention to the box that wraps around (cells 4 and 6, in the 1st and 4th columns).
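That the two boxed pairs exactly cover Σ(1,3,4,6) can be confirmed by brute force:

```python
# Check that the K-map reduction of f(x,y,z) = Sigma(1,3,4,6) to x'z + xz'
# agrees with the original minterm specification on every input.
from itertools import product

minterms = {1, 3, 4, 6}
for i, (x, y, z) in enumerate(product((0, 1), repeat=3)):
    f = 1 if i in minterms else 0
    reduced = ((1 - x) & z) | (x & (1 - z))    # x'z + xz'
    assert f == reduced
```

Note that x̄z + xz̄ is just X ⊕ Z, which is why the Y variable drops out entirely.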

For a more complex example, consider f(X,Y,Z) = Σ(1,3,4,5):

  X \ YZ:  00  01  11  10
     0          1   1
     1      1   1

Here f(X,Y,Z) can be reduced to either of the following:

  f(X,Y,Z) = X̄Z + XȲ
  f(X,Y,Z) = X̄Z + XȲ + ȲZ

Note that the term ȲZ is "redundant" since its 1's are covered by the other two terms. The first expression is called a minimal sum of products expression for f(X,Y,Z) since it cannot be reduced further. For combinational circuits, the redundant term can be omitted, but sometimes in the context of sequential circuits, where intermediate values matter, it must be left in.

With 4 variables, the K-map pattern is

  AB \ CD:  00  01  11  10
     00      0   1   3   2
     01      4   5   7   6
     11     12  13  15  14
     10      8   9  11  10

Now the Gray code pattern of the rows must also be present for the columns. More complex situations can also arise; for example,

  AB \ CD:  00  01  11  10
     00      1           1
     01              1   1
     11          1   1
     10      1   1

describes f(A,B,C,D) = Σ(0,2,6,7,8,9,13,15). There are two patterns present that produce a minimal number of terms, one grouping pairs within the rows and one grouping pairs within the columns. Hence, either of the following produces a minimal sum of products expression:

  from the rows:     f(A,B,C,D) = ĀB̄D̄ + ĀBC + ABD + AB̄C̄
  from the columns:  f(A,B,C,D) = B̄C̄D̄ + ĀCD̄ + BCD + AC̄D

In either case we know we have the function since all 1's are covered. When working with maxterms, the 0's of the function are what is considered. For the function above, f(A,B,C,D) = Π(1,3,4,5,10,11,12,14) and the K-map of 0's is

  AB \ CD:  00  01  11  10
     00          0   0
     01      0   0
     11      0           0
     10              0   0

leading to the following two minimal product of sums expressions:

  from the rows:     f(A,B,C,D) = (A+B+D̄)(A+B̄+C)(Ā+B̄+D)(Ā+B+C̄)
  from the columns:  f(A,B,C,D) = (B̄+C+D)(A+C+D̄)(B+C̄+D̄)(Ā+C̄+D)

Be sure to observe that when working with maxterms, "barred" items correspond to 0's and unbarred items correspond to 1's, exactly the opposite of what is done when working with minterms. Just as a 4-variable K-map is formed by combining two 3-variable maps, a 5-variable K-map can be formed by combining two 4-variable maps
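Both minimal sum-of-products forms really are the same function; a brute-force Python check against the minterm list:

```python
# Check that both minimal SOP forms cover exactly Sigma(0,2,6,7,8,9,13,15).
from itertools import product

minterms = {0, 2, 6, 7, 8, 9, 13, 15}
for i, (a, b, c, d) in enumerate(product((0, 1), repeat=4)):
    a_, b_, c_, d_ = 1 - a, 1 - b, 1 - c, 1 - d     # complements
    rows = (a_ & b_ & d_) | (a_ & b & c) | (a & b & d) | (a & b_ & c_)
    cols = (b_ & c_ & d_) | (a_ & c & d_) | (b & c & d) | (a & c_ & d)
    assert rows == cols == (1 if i in minterms else 0)
```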

Page 3 (conceptually, on top of the other, representing and for the 5 th variable). In general, blocks of size 2 n are the ones that can be reduced. Here are blocks of size 4 on a 4-variable K-map: AB CD 3 2 AB CD 3 2 4 5 7 6 4 5 7 6 2 3 5 4 2 3 5 4 8 9 8 9 f(a,b,c,d) = AB f(a,b,c,d) = AD AB CD 3 2 AB CD 3 2 4 5 7 6 4 5 7 6 2 3 5 4 2 3 5 4 8 9 8 9 f(a,b,c,d) = B D f(a,b,c,d) = BD In each case, the horizontal term with against is omitted and the vertical term with against is omitted. Be sure to pay particular attention to the pattern with a in each corner, where A is omitted vertically and C is omitted horizontally. Note that each block of 4 contains 4 blocks of 2, but these are not diagrammed since they are absorbed (in contrast, the Quine-McCloskey method, which we won t look at until later, does keep tabs on all such blocks!). In general, an implicant (implicate for 's) is a term that is a product of inputs (including complements) for which the function evaluates to whenever the term evaluates to. These are represented by blocks of size 2n on K-maps.

A prime implicant (implicate for 0's) is one not contained in any larger block of 1's.

An essential prime implicant is a prime implicant containing a 1 not covered by any other prime implicant. A distinguished cell is a 1-cell covered by exactly 1 prime implicant.

A don't care cell is one that may be either 0 or 1 for a particular circuit. The value used in K-map analysis is the one which increases the amount of reduction. Don't care conditions occur because in circuits there are often combinations of inputs that cannot occur, so we don't care whether their values are 0 or 1.

General Procedure for Circuit Reduction Using K-maps
1. Map the circuit's function into a K-map, marking don't cares by using dashes
2. Treating don't cares as if they were 1's (0's for implicates), box in all prime implicants (implicates), omitting any consisting solely of dashes
3. Mark any distinguished cells with * (dashes don't count)
4. Include all essential prime implicants in the sum, change their 1's to dashes and remove their boxes - exit if there aren't any more 1's at this point
5. Remove any prime implicants whose 1's are contained in a box having more 1's (dominated case); if there is a case where the number of 1's is the same (codominant case), discard the smaller box; if the number of 1's is the same and the box sizes are the same, discard either
6. Go back to step 3 if there are any new distinguished cells
7. Include the largest of the remaining prime implicants in the sum and go back to step 4 (this step is rarely needed) - if there is no largest, choose any
8. If step 7 was used, choose from among the possible sums the one with the fewest terms, then the one using the fewest variables

Remark: for some K-maps, such as ones whose prime implicants form a closed chain with no essential prime implicant, step 7 will be employed.
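The implicant and prime implicant definitions above lend themselves to a brute-force computation, in the spirit of (but much cruder than) the Quine-McCluskey method. In this Python sketch (the helper names are ours), a term fixes some subset of the variables, its block is the set of cells it covers, and a prime implicant is a term whose block is not properly contained in any other implicant's block:

```python
# Brute-force prime implicant enumeration for a function given by its
# minterm set. A term is a dict var -> 0/1; fewer fixed variables means
# a larger block (2^n cells) on the K-map.
from itertools import combinations, product

def implicants(minterms, nvars=4):
    """All product terms whose covered cells lie entirely in the minterm set."""
    out = []
    for nfixed in range(nvars + 1):
        for vars_ in combinations(range(nvars), nfixed):
            for vals in product((0, 1), repeat=nfixed):
                term = dict(zip(vars_, vals))
                cells = {i for i, bits in enumerate(product((0, 1), repeat=nvars))
                         if all(bits[v] == term[v] for v in term)}
                if cells and cells <= set(minterms):
                    out.append((term, cells))
    return out

def prime_implicants(minterms, nvars=4):
    """Implicants not properly contained in any larger implicant's block."""
    imps = implicants(minterms, nvars)
    return [(t, cs) for t, cs in imps
            if not any(cs < cs2 for _, cs2 in imps)]

# The earlier example Sigma(0,2,6,7,8,9,13,15): its prime implicants are
# exactly the 8 two-cell blocks seen in the row and column groupings.
primes = prime_implicants({0, 2, 6, 7, 8, 9, 13, 15})
assert len(primes) == 8
```

For that example no prime implicant is essential (each 1-cell is covered by two of the eight pairs), which is precisely the kind of cyclic pattern that forces step 7.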