Copyright 2012 Pearson Education, Inc. Chapter 1 INTRODUCTION TO COMPUTING AND ENGINEERING PROBLEM SOLVING




Outline Objectives 1. Historical Perspective 2. Recent Engineering Achievements 3. Computing Systems 4. Data Representation and Storage 5. An Engineering Problem-Solving Methodology

Objectives Introduce computing and engineering problem solving, including: A brief history Recent engineering achievements A discussion of numbering systems A discussion of hardware and software A five-step problem-solving methodology

Historical Perspective Charles Babbage (1792-1871) designed the Analytical Engine to process decimal numbers. Augusta Ada Byron (1815-1852) wrote the first computer program.

Charles Babbage, Esq. (1792-1871) English mathematician. Designed the Analytical Engine in the early 1800s. Published "Of the Analytical Engine" in 1864.

Analytical Engine Designed to process base ten numbers. Consisted of four parts: Storage unit Processing unit Input device Output device

Analytical Engine Luigi F. Menabrea, Italian engineer and mathematician, described Babbage's vision of a machine capable of solving any problem using: Inputs Outputs Programs written on punch cards

Augusta Ada Byron (1815-1852) Wrote the English translation of Menabrea's Sketch of the Analytical Engine. Envisioned the multidisciplinary potential of the Analytical Engine. Wrote detailed instructions for performing numerical computations using the Analytical Engine.

Digital Computers ABC (Atanasoff-Berry Computer) Developed at Iowa State University between 1939 and 1942 by John Atanasoff and Clifford Berry. Weighed 700 pounds. Executed one instruction every 15 seconds.

Digital Computers ENIAC (Electronic Numerical Integrator And Calculator) Developed by a research team led by John Mauchly and J. Presper Eckert during the early 1940s. Weighed 30 tons. Executed hundreds of instructions every second.

ENIAC vs. Intel Pentium 4 ENIAC executed hundreds of operations per second and weighed 30 tons. Today's processors execute trillions of operations per second and weigh ounces.

Recent Engineering Achievements Image credits: NASA/JPL/Malin Space Science Systems.

Recent Engineering Achievements Extraterrestrial Explorations First manned lunar landing (July 21, 1969) Mars Global Surveyor, Mars Reconnaissance Orbiter, and Mars Exploration Rovers Terrestrial Application Satellites Computer Axial Tomography (CAT) Scanners Computer simulations Advanced composite materials. Speech understanding Weather, climate, and global change prediction

Recent Engineering Achievements Digital computers facilitate multidisciplinary engineering achievements that: Improve our lives. Expand the possibilities for our future. The changing engineering environment requires engineers with: Communication skills. Skills for working in interdisciplinary teams. An awareness of ethical issues and environmental concerns. A global perspective.

Computing Systems The von Neumann Computing Model

Computing Systems A computing system is a complete working system that includes: Hardware Software

Hardware Hardware refers to the physical parts of the computing system that have mass (i.e., they can actually be touched): Computer Display Mouse Printer

Hardware John von Neumann computing model Input device(s) Output device(s) Memory Unit CPU (Central Processing Unit) consisting of: Control Unit ALU (Arithmetic Logic Unit)

Software Interface to Computer Hardware

Software Computer software refers to programs that reside and execute electronically on the hardware. Compilers Translate source code Operating systems Provide the HCI (Human Computer Interface) Application programs Provide problem solutions

Building a Program Computers only understand machine language. High-level languages like C++ must be translated to machine language for execution.

Key Terms Source program: printable, human-readable program file. Object program: nonprintable, machine-readable file. Executable program: nonprintable executable code.

Errors in Programs Syntax/parse errors: mistakes with the language; always reported by the compiler. Linking errors: missing pieces prevent the final assembly of an executable program. Run-time errors: occur while the program is executing; may or may not be reported.

Logic Errors Can be difficult to find, and debugging can be time consuming even with good debugging tools. It is important to carefully check the output of your programs for errors. Even programs that appear to work correctly may have bugs!

Debugging Process of eliminating logic errors (i.e., bugs) from programs. User-friendly programming environments such as Microsoft Visual C++ integrate the compiler with text processors and code editors, special tools to help find bugs in programs (a debugger), testing tools, and much more.

Data Representation and Storage 00110101001001001010101111101110 10101011111011100011010100100100 11000110110101011111001001001010 10101011111101001001000101110001 00100110111110101010001101010011 01001001001010101111101110001101 10100001101010010010111010011111

Data Representation and Storage Digital computers store information as a sequence of bits (binary digits). The value or state of a bit at any given time can be 0 or 1 (off or on). Data is stored as a sequence of bytes. A byte is a sequence of 8 bits.

Memory Diagram
Address   Sixteen-Bit Word
000       0000101011011101
001       1010001011010100
010       1011010010100101
011       0101001101010101
100       0101000111001110
101       1100110000111010
110       0100011101001001
111       0101110001001000
Address Space = 8, Word Size = 16

Data Representation The rightmost bit is referred to as the least significant bit. The leftmost bit is referred to as the most significant bit. The value stored at address 000 is 0000101011011101₂ = 2781₁₀. But what does it represent?

Numbering Systems Base ten number system Ten decimal digits (0,1,2,3,4,5,6,7,8,9) Each digit multiplies a power of ten Example: 245₁₀ = 2×10² + 4×10¹ + 5×10⁰

Numbering Systems Base two (binary) number system Two binary digits (0,1) Each digit multiplies a power of two Example: 10110₂ = 1×2⁴ + 0×2³ + 1×2² + 1×2¹ + 0×2⁰ = 1×16 + 0×8 + 1×4 + 1×2 + 0×1 = 16 + 0 + 4 + 2 + 0 = 22₁₀

Numbering Systems Base eight number system Eight octal digits (0,1,2,3,4,5,6,7) Each digit multiplies a power of eight Example: 245₈ = 2×8² + 4×8¹ + 5×8⁰ = 2×64 + 4×8 + 5×1 = 128 + 32 + 5 = 165₁₀

Numbering Systems Base sixteen number system Sixteen hex digits (0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F) Each digit multiplies a power of sixteen Example: 2FB₁₆ = 2×16² + F×16¹ + B×16⁰ = 2×256 + 15×16 + 11×1 = 512 + 240 + 11 = 763₁₀ (F = 15, B = 11)

Practice with Number Systems 100₂ = ?₈ 3716₈ = ?₂ 110100111₂ = ?₁₀ 3A1B₁₆ = ?₂

Practice with Number Systems 100₂ = 4₈ 3716₈ = 011 111 001 110₂ 110100111₂ = 423₁₀ 3A1B₁₆ = 0011 1010 0001 1011₂

Data Types Integer data type: often represented in 4 bytes (system dependent). The leftmost bit is reserved for the sign of the number; the remaining 31 bits represent the magnitude of the number.

Data Types Representation of data affects the efficiency of arithmetic and logic operations. For efficiency, negative integers are often represented in their 2's complement form. The 2's complement of an integer is formed by negating all of the bits and adding one.

Two's Complement Form the 2's complement representation of the value -127₁₀, assuming a word size of 8 bits for simplicity. 127₁₀ = 01111111₂ Negate the bits: 10000000 Add 1: 10000001 The 2's complement is 10000001₂

Two's Complement Add 127₁₀ to -127₁₀: 01111111₂ (127₁₀) + 10000001₂ (-127₁₀) = 00000000₂ = 0₁₀ (the carry out of the eighth bit is discarded)

Data Types Floating Point Data Floating point types represent real numbers, such as 1.25, that include a decimal point. Digits to the right of the decimal point form the fractional part of the number. Digits to the left of the decimal point form the integral part of the number.

Practice with Decimals Convert 12.25₁₀ to binary.

Practice with Decimals Convert 12.25₁₀ to binary. First convert the integer part: 12₁₀ = 1100₂. Then repeatedly multiply the fractional part by 2, taking the integer part as each new bit: 0.25×2 = 0.5 → 0; 0.50×2 = 1.0 → 1. Therefore: 12.25₁₀ = 1100.01₂

Engineering Problem-Solving Methodology

Five Step Problem-Solving Methodology 1. State the problem clearly. 2. Describe the input and output. 3. Work a hand example. 4. Develop a solution. 5. Test your solution.