2 Elements of classical computer science


Good references for the material of this section are [3], Chap. 3, [5], Secs. 2 and 3, [7], Sec. 6.1, and [11] as a nice readable account of complexity with some more details than we will treat (and need) here.

2.1 Bits of History

The inventor of the first programmable computer was probably Charles Babbage (1791-1871). He was interested in the automatic computation of mathematical tables and designed the mechanical "analytical engine" in the 1830s. The engine was to be controlled and programmed by punchcards (a technique already known from the automatic Jacquard loom), but was never actually built. Babbage's unpublished notebooks were discovered in 1937, and the 31-digit accuracy Difference Engine No. 2 was built to Babbage's specifications in 1991. (Babbage was also Lucasian professor of mathematics in Cambridge, like Newton, Stokes, Dirac, and Hawking, and he invented the locomotive cowcatcher.) The first computer programmer probably was Ada Augusta King, Countess of Lovelace (1815-1852), daughter of the famous poet Lord Byron, who devised a programme to compute Bernoulli numbers (recursively) with Babbage's engine. From this example we learn that the practice of writing programmes for not-yet existing computers is older than the quantum age. Another important figure from 19th century Britain is George Boole (1815-1864), who in 1847 published his ideas for formalizing logical operations by using operations like AND, OR, and NOT on binary numbers. Alan Turing (1912-1954) invented the Turing machine in 1936 in the context of the decidability problem posed by David Hilbert: is it always possible to decide whether a given mathematical statement is true or not? (It is not, and Turing's machine helped show that.) For further details on the history of computing, consult the history section of the entry "Computers" in the Encyclopaedia Britannica (unfortunately no longer available for free on the web).
2.2 Boolean algebra and logic gates

Computers manipulate bit strings. Any transformation on n bits can be decomposed into one- and two-bit operations. There are only two one-bit operations: identity and NOT. A logic gate is a one-bit function of a two-bit argument: (x, y) -> f(x, y), where x, y, f = 0 or 1. Each of the four possible inputs 00, 01, 10, 11 can be mapped to one of the two possible outputs 0 and 1; there are 2^4 = 16 possible output strings of 4 symbols and thus there are 16 possible logic gates or Boolean functions. Note that these gates are irreversible, since the output is shorter than the input.

Before we discuss these functions we recall some elements of standard Boolean logic. Boolean logic knows two truth values, true and false, which we identify with 1 and 0, respectively, and the elementary operations NOT, OR, and AND, from which all other operations and relations like IMPLIES or XOR can be constructed. The binary operations OR and AND are defined by their truth tables:

x y | x OR y | x AND y
0 0 |   0    |    0
0 1 |   1    |    0
1 0 |   1    |    0
1 1 |   1    |    1

Note that, for example,

x XOR y = (x OR y) AND NOT (x AND y).

(XOR is often denoted by +, the plus sign in a circle, because it is equivalent to addition modulo 2.)

We now return to the 16 Boolean functions of two bits. We number them according to the four-bit output string as given in the above truth table, read from top to bottom and interpreted as a binary number. Example: AND outputs 0001 = 1 and OR outputs 0111 = 7. We can thus characterize each gate (function) by a number between 0 and 15 and discuss them in order:

0: The absurdity, for example (x AND y) AND NOT (x AND y).
1: x AND y

2: x AND (NOT y)
4: (NOT x) AND y
8: (NOT x) AND (NOT y) = x NOR y
3: x, which can be written in a more complicated way: x = x OR (y AND NOT y)
5: y = ... (see above)
9: ((NOT x) AND (NOT y)) OR (x AND y) = NOT (x XOR y) = (x EQUALS y)

All others can be obtained by negating the above; notable are

15: The banality, for example (x AND y) OR NOT (x AND y).
14: NOT (x AND y) = x NAND y
13: NOT (x AND (NOT y)) = x IMPLIES y

We have thus seen that all logic gates can be constructed from the elementary Boolean operations. Furthermore, since x OR y = (NOT x) NAND (NOT y), we see that we only need NAND and NOT to achieve any desired classical logic gate.

Figure 2.1: A logic gate (input lines IN, output line OUT).

If we picture logic gates as elements with two input lines and one output line and connect these logic gates by wires through which we feed the input, we get what is called the network model of computation.

2.2.1 Minimum set of irreversible gates

We note that x NAND y = NOT (x AND y) = (NOT x) OR (NOT y) = 1 - xy (x, y = 0, 1). If we can copy x to another bit, we can use NAND to achieve NOT:

x NAND x = 1 - x^2 = 1 - x = NOT x

(where we have used x^2 = x for x = 0, 1), or, if we can prepare a constant bit 1:

x NAND 1 = 1 - x = NOT x.

We can express AND and OR by NAND only:

(x NAND y) NAND (x NAND y) = 1 - (1 - xy)^2 = 1 - (1 - 2xy + x^2 y^2) = 1 - (1 - xy) = xy = x AND y

and

(x NAND x) NAND (y NAND y) = (NOT x) NAND (NOT y) = 1 - (1 - x)(1 - y) = x + y - xy = x OR y.

Thus NAND and COPY are a universal set of (irreversible) gates. It is also possible to use NOR in place of NAND. In fact, NAND and COPY can be performed by the same two-bit to two-bit gate, if we can prepare a bit in state 1. This is the NAND/NOT gate:

(x, y) -> (1 - x, 1 - xy) = (NOT x, x NAND y).

The NOT and NAND functions are obviously achieved by ignoring one output bit. For y = 1 we obtain COPY, combined with a NOT, which can be inverted.
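The algebraic identities above can be verified by brute force over all bit values. A minimal sketch in Python, assuming bits are represented as the integers 0 and 1 (the function names are illustrative, not from the text):

```python
def nand(x, y):
    # x NAND y = 1 - x*y for bits x, y in {0, 1}
    return 1 - x * y

def not_(x):
    # x NAND x = 1 - x^2 = 1 - x (using x^2 = x for bits)
    return nand(x, x)

def and_(x, y):
    # (x NAND y) NAND (x NAND y) = 1 - (1 - xy)^2 = xy
    t = nand(x, y)
    return nand(t, t)

def or_(x, y):
    # (x NAND x) NAND (y NAND y) = 1 - (1 - x)(1 - y) = x + y - xy
    return nand(nand(x, x), nand(y, y))

# exhaustive check over all four input pairs
for x in (0, 1):
    for y in (0, 1):
        assert not_(x) == 1 - x
        assert and_(x, y) == (x & y)
        assert or_(x, y) == (x | y)
```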

2.2.2 Minimum set of reversible gates

Although we know how to construct a universal set of irreversible gates, there are good reasons to study the reversible alternative. Firstly, quantum gates are reversible, and secondly, it is interesting to discuss the possibility of reversible (dissipationless) computation. A reversible computer evaluates an invertible n-bit function of n bits. Note that every irreversible function can be made reversible at the expense of additional bits:

x (n bits) -> f (m bits)

is replaced by

(x, 0...0 (m zeros)) -> (x, f) (n + m bits).

The possible reversible n-bit functions are the permutations of the 2^n possible bit strings; there are (2^n)! such functions. For comparison, the number of arbitrary n-bit functions is (2^n)^(2^n). The universal gate set for reversible (classical) computation will have to include three-bit gates, as discussed in [7]. The number of reversible 1-, 2-, and 3-bit gates is 2, 24, and 40320, respectively.

One of the more interesting reversible two-bit gates is the CNOT, also known as "reversible XOR": (x, y) -> (x, x XOR y), that is

x y | x | x XOR y
0 0 | 0 |    0
0 1 | 0 |    1
1 0 | 1 |    1
1 1 | 1 |    0

The reversibility is made possible by the storage of x. The square of this gate is the identity. XOR is often symbolized by a plus sign in a circle. The CNOT gate can be used to copy a bit, because it maps (x, 0) -> (x, x). The combination of three CNOT gates in Fig. 2.2 achieves a SWAP:

Figure 2.2: Left: single CNOT gate. Right: SWAP gate built from three CNOTs.

(x, y) -> (x, x XOR y) -> ((x XOR y) XOR x, x XOR y) = (y, x XOR y) -> (y, y XOR (x XOR y)) = (y, x).

Thus the reversible XOR can be used to copy and move bits around. We will now show that the functionality of the universal NAND/NOT gate discussed above can be achieved by adding a three-bit gate to our toolbox, the Toffoli gate, also known as controlled-controlled-NOT, which maps

(x, y, z) -> (x, y, z XOR xy).

This gate is universal, provided we can provide fixed input bits and ignore output bits: for z = 1 we have

(x, y, 1) -> (x, y, 1 - xy) = (x, y, x NAND y).
For x = 1 we obtain z XOR y, which can be used to copy, swap, etc. For x = y = 1 we obtain NOT z.
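These reversible gates are easy to check exhaustively. A minimal Python sketch (bits as the integers 0 and 1; function names are mine, not from the text):

```python
def cnot(x, y):
    # CNOT / reversible XOR: (x, y) -> (x, x XOR y)
    return x, x ^ y

def swap(x, y):
    # three CNOTs with alternating control/target, as in Fig. 2.2
    x, y = cnot(x, y)   # (x, x XOR y)
    y, x = cnot(y, x)   # control is now the second wire: (y, x XOR y)
    x, y = cnot(x, y)   # (y, x)
    return x, y

def toffoli(x, y, z):
    # controlled-controlled-NOT: (x, y, z) -> (x, y, z XOR xy)
    return x, y, z ^ (x & y)

for x in (0, 1):
    for y in (0, 1):
        assert cnot(*cnot(x, y)) == (x, y)   # the square of CNOT is identity
        assert cnot(x, 0) == (x, x)          # CNOT copies a bit
        assert swap(x, y) == (y, x)
        # with z = 1 the Toffoli gate computes x NAND y on the third wire
        assert toffoli(x, y, 1)[2] == 1 - x * y
```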

Thus we can do any computation reversibly. In fact it is even possible to avoid the dissipative step of memory clearing (in principle): store all the garbage which is generated during the reversible computation, copy the end result of the computation, and then let the computation run backwards to clean up the garbage without dissipation. This may save some energy dissipation, but it has a price (as compared to reversible computation with final memory clearing): the time (number of steps) grows from T to roughly 2T, and additional storage space growing roughly as T is needed. However, there are ways [7] to split the computation up into a number of steps which are inverted individually, so that the additional storage grows only as log T, but in that case more computing time is needed. Note that we have used the possibility to COPY bits without much thought. In quantum computation we will have to do without COPYing, due to the no-cloning theorem!

2.3 The Turing machine as a universal computer

Figure 2.3: A Turing machine operating on a tape with binary symbols and possessing several internal states, including the halt state.

The Turing machine acts on a tape (or string of symbols) as an input/output medium. It has a finite number of internal states. If the machine reads the binary symbol s from the tape while being in state G, it will replace s by another symbol s', change its state to G', and move the tape one step in direction d (left or right). The machine is completely specified by a finite set of transition rules

(s, G) -> (s', G', d).

The machine has one special internal state, the halt state, in which the machine stops all further activity. On input the tape contains the program and input data; on output, the result of the computation. The (finite) set of transition rules for a given Turing machine T can be coded as a binary number d[T] (the description of T). Let T(x) be the output of T for a given input tape x.
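The transition-rule picture translates directly into a short simulator. The following Python sketch, including the toy rule set (flip 1s until the first 0 is reached, then halt), is an illustration of mine, not a machine from the text:

```python
def run_turing(rules, tape, state="a", pos=0, halt="h", max_steps=10_000):
    # rules: dict mapping (symbol, state) -> (new_symbol, new_state, direction)
    # tape is stored sparsely as a dict from position to symbol; blanks read as 0
    tape = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == halt:
            break
        s = tape.get(pos, 0)
        s_new, state, d = rules[(s, state)]
        tape[pos] = s_new
        pos += 1 if d == "R" else -1
    return [tape[i] for i in sorted(tape)]

# Toy machine: in state "a", flip each 1 to 0 and move right;
# on reading a 0, write 1 and enter the halt state.
rules = {
    (1, "a"): (0, "a", "R"),
    (0, "a"): (1, "h", "R"),
}
result = run_turing(rules, [1, 1, 0])   # tape becomes [0, 0, 1]
```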
Turing showed that there exists a universal Turing machine U with U(d[T], x) = T(x), and the number of steps U needs to simulate each step of T is only a polynomial function of the length of d[T]. Thus we only have to supply the description d[T] of T and the original input x on a tape to U, and U will perform the same task as any machine T, with at most polynomial slowdown. Other models of computation (for example the network model) are computationally equivalent to the Turing model: the same tasks can be performed with the same efficiency. Church [12] and Turing [13] stated the Church-Turing thesis: "Every function which would naturally be regarded as computable can be computed by the universal Turing machine." ("Computable" functions comprise an extremely broad range of tasks, such as text processing etc.) There is no proof of this thesis, but also no counterexample, despite many attempts to find one.

2.4 Complexity and complexity classes

We will not discuss many examples of problems from the different complexity classes. The article by Mertens [11] gives several examples in an easy-to-read style.

Consider some task to be performed on an input (integer, for simplicity) number x, for example finding x^2 or determining if x is a prime. The number of bits needed to store x is L = log_2 x. The computational complexity of the task characterizes how fast the number s of steps a Turing machine needs to solve the problem increases with L. Example: the method by which most of us have computed squares of large numbers in primary school has roughly s proportional to L^2 (if you identify s with the number of digits you have to write on your sheet of paper). This is a typical problem from class P: there is an algorithm for which s is a polynomial function of L. If s rises exponentially with L, the problem is considered hard. (Note, however, that it is not possible to exclude the discovery of new algorithms which make problems tractable!)

It is often much easier to verify a solution than to find it; think of factorizing large numbers. The class NP consists of problems for which solutions can be verified in polynomial time. Of course P is contained in NP, but it is not known if NP is actually larger than P, basically because revolutionary algorithms may be discovered any day. NP means "non-deterministic polynomial". A non-deterministic polynomial algorithm may branch into two paths which are both followed in parallel. Such a tree-like algorithm is able to perform an exponential number of computational steps in polynomial time (at the expense of exponentially growing parallel computational capacity!). To verify a solution, however, one only has to follow the right branch of the tree, and that is obviously possible in polynomial time.

Some problems may be reduced to other problems; that is, the solution of a problem P1 may be used as a step or subroutine in an algorithm to solve another problem P2. Often it can be shown that P2 may be solved by applying the subroutine P1 a polynomial number of times; then P2 is polynomially reducible to P1: P2 <= P1. (Read: P2 cannot be harder than P1.) Some nice examples are provided by problems from graph theory, where one is looking for paths with certain properties through a given graph (or network); see [11]. A problem is called NP-complete if any NP problem can be reduced to it. Hundreds of NP-complete problems are known, one of the most famous being the traveling salesman problem. If somebody finds a polynomial solution for an NP-complete problem, then P = NP and one of the most fundamental problems of theoretical computer science is solved. This is, however, very unlikely, since many clever people have tried in vain to find such a solution. (It should be noted at this point that theoretical computer science bases its discussion of complexity classes on worst-case complexity. In practical applications it is very often possible to find excellent approximations to, say, the traveling salesman problem within reasonable time.)

A famous example of a hard problem is the factoring problem already discussed in the preceding chapter. Some formulas, numbers, and references on this problem and its relation to cryptography can be found in Section 3.2 of Steane's review [5]. Since the invention of Shor's [10] quantum factorization algorithm, suspicions have grown that the factoring problem may be in class NPI (I for "intermediate"), that is, harder than P, but not NP-complete. If this class exists, then P is not equal to NP.

Some functions may be not just hard to compute but uncomputable, because the algorithm will never stop, or nobody knows if it will ever stop. An example is the algorithm "while x is equal to the sum of two primes, add 2 to x; otherwise print x and halt", beginning at x = 8. If this algorithm stops, we have found a counterexample to the famous Goldbach conjecture. Another famous unsolvable problem is the halting problem, which is stated very easily: is there a general algorithm to decide if a Turing machine T with description (transition rules) d[T] will stop on a certain input x?
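The Goldbach procedure quoted above can be sketched directly. Since nobody knows whether the uncapped loop ever halts, this illustrative version adds an artificial cap so that the sketch terminates:

```python
def is_prime(n):
    # trial division; fine for a small illustrative search
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_sum_of_two_primes(x):
    return any(is_prime(p) and is_prime(x - p) for p in range(2, x // 2 + 1))

# "while x is equal to the sum of two primes, add 2 to x, otherwise halt"
x = 8
while x < 1000 and is_sum_of_two_primes(x):   # cap at 1000 so the sketch stops
    x += 2
# x == 1000 here: no Goldbach counterexample below the cap
```

Removing the cap turns this back into the program from the text, whose halting status is exactly the open Goldbach conjecture.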
There is a nice argument by Turing showing that such an algorithm does not exist. Suppose such an algorithm existed. Then it would be possible to make a Turing machine T_H which halts if and only if T(d[T]) (that is, T fed its own description) does not halt:

T_H(d[T]) halts <=> T(d[T]) does not halt.

(This is possible since the description d[T] contains sufficient information about the way T works.) Now feed T_H the description of itself, that is, put T = T_H:

T_H(d[T_H]) halts <=> T_H(d[T_H]) does not halt.

This contradiction shows that there is no algorithm that solves the halting problem. (Or that every Turing machine halts when fed its own description; that possibility must be excluded somehow.) This is a nice recursive argument: let an algorithm find out something about its own structure. This kind of reasoning is typical of the field centered around Goedel's incompleteness theorem; a very interesting literary work centered on the ideas of recursiveness and self-reference (in mathematics and other fields of culture) is the book "Goedel, Escher, Bach" [14] by the physicist/computer scientist Douglas R. Hofstadter.