Introduction to Predicate Calculus

1 Motivation

Last time, we discussed the need for a representation language that was declarative, expressively adequate, unambiguous, context-independent, and compositional. We looked to logic for help.

A logic has a language and an inference procedure. A language in turn has a syntax, which includes a vocabulary and a set of connectives. A language also has a semantics: we get to say what the ordinary vocabulary symbols mean, but the connectives have a fixed, truth-conditional meaning. (In some traditions, truth conditions are taken to be the same thing as meaning. This can be generalized to conditions of satisfaction for things other than statements. This seems to be a very limited theory of meaning, though.)

Our first logic was propositional calculus. This has symbols (denoting propositions) and some logical connectives (usually ∨ (or), ∧ (and), ¬ (not), and → (implies)). Sentences are either symbols or sentences joined by logical connectives. The inference procedure is usually just modus ponens. Keep in mind that logic doesn't do anything: a proof theory tells us what a valid inference is, but it doesn't tell us which ones to make when.

A problem with propositional calculus, we said, is that it isn't expressive enough. For example, suppose we want to capture the knowledge that

Anyone standing in the rain will get wet.

and then use this knowledge. For example, suppose we also learn that

Jan is standing in the rain.

We'd like to conclude that Jan will get wet. But each of these sentences would just be represented by some proposition, say P, Q, and R. What relationship is there between these propositions? Yes, we can say

P ∧ Q → R

Then, given P ∧ Q, we could indeed conclude R. But now, suppose we were told that Pat is standing in the rain. We'd like to be able to conclude that Pat will get wet, but nothing we have stated so far will help us do this. The problem, it seems, is that we aren't able to represent any of the details of these propositions. It's the internal structure of these propositions that makes the reasoning valid. But in propositional calculus we don't have anything else to talk about besides propositions!
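To make the limitation concrete, here is a minimal sketch of propositional reasoning (my own illustration, not part of the original notes): facts are opaque proposition symbols, rules are premise-conclusion pairs, and inference is just modus ponens applied until nothing new follows. Because propositions have no internal structure, the rain rule has to be restated separately for every individual.

    def forward_chain(facts, rules):
        """Repeatedly apply modus ponens until no new propositions are derived."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    # One rule per person -- the proposition names are just opaque labels.
    rules = [({"Jan-standing-in-rain"}, "Jan-wet")]
    known = forward_chain({"Jan-standing-in-rain", "Pat-standing-in-rain"}, rules)
    print("Jan-wet" in known)   # True
    print("Pat-wet" in known)   # False: nothing connects Pat's propositions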

2 Predicate Calculus

Thus, even for very simple kinds of reasoning, we'd like a richer logic. The first thing we need to think about is what kinds of things there are in the world to talk about. We make a basic dichotomy:

- Things (or entities or objects, or what have you). This category includes people, numbers, unicorns, actions, events, places, ... We'll call these objects.
- Things that can be said about things. This includes large, larger than, are brother and sister, squeamish, above, Republican, cost, ... We'll call these relations.

Note that you might think it is funny to call something like large a relation, since it's not relating anything to anything else. Instead, it seems more natural to call such things properties. We'll simply consider a property a kind of degenerate relation.

Semantically, we'll usually talk about relations (and properties, of course) in terms of sets. E.g., large is supposed to be the set of large things; above is the set of ordered pairs of things in which the first is above the second. This is a strange way of thinking about the world, and indeed, there are all sorts of problems with it, but it'll do for now.

Exactly what goes where is up to you. For example, we might decide that colors, e.g., red, white, and blue, are properties. On the other hand, we could also think of them as objects: then we might have a property being a color and a relation is-the-color-of that apply to color objects. Or maybe there are both color objects and color properties. These issues aren't generally thought of as issues of logic so much as issues of ontology. A lot of knowledge representation is about exactly such decisions. If something is a relation, does that mean it is not an object? Why would we care? Well, one implication is whether we can say things about it. More on this later.

We can confuse this clear picture a little by introducing functions. A function is just a relation in which specifying n-1 members of a tuple guarantees that there is only one tuple with those members. For example, we might have a relation mother which lists all tuples pairing a mother and a child. Since each child has (presumably) just one mother, the relation is a function from the point of view of the child element of the tuple, but of course not from the point of view of the mother.
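As a small illustration (my own sketch, with made-up mother/child pairs, not from the notes), we can model this set-theoretic picture directly: a property is a set of 1-tuples, a relation is a set of n-tuples, and a relation is a function "from the point of view of" one position exactly when fixing the other positions leaves at most one possible value there.

    # A property as a set of 1-tuples and a relation as a set of 2-tuples.
    large = {("whale",), ("mountain",)}
    mother = {("Ann", "Jan"), ("Ann", "Lou"), ("Sue", "Pat")}   # (mother, child) pairs

    def functional_in(relation, position):
        """True if fixing every argument except `position` determines it uniquely."""
        seen = {}
        for tup in relation:
            rest = tup[:position] + tup[position + 1:]
            if rest in seen and seen[rest] != tup[position]:
                return False
            seen[rest] = tup[position]
        return True

    print(functional_in(mother, 0))  # True: each child has exactly one mother
    print(functional_in(mother, 1))  # False: a mother may have several children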

3 Syntax

Okay, so that's what there is in the universe, more or less. What's our language?

Symbols
- Object symbols, e.g., John1, x.
- Predicate symbols, e.g., Tall, >, Cost. Note that predicates have an arity, i.e., something is a predicate of some fixed number of arguments.
- Function symbols. These look pretty much like predicates, and indeed, they also have an arity.

Each of these comes in two flavors, a constant and a variable. For the time being, though, we'll only worry about variables for objects, and assume that all predicate and function symbols are constants.

Terms
A term is either a symbol or a functional expression (or function application). A functional expression is a function symbol and a bunch of terms. (Intuitively, the function is applied to the terms. But remember that the functions here don't do anything; they just denote.) It is most common to write the function symbol followed by all the arguments in parentheses. So, for example, we might have

Mother-of(Jan)
Age-of(Jan)
Age-of(Mother-of(Jan))
Arctan(π)
+(2,3)

In LISP-style notation we write

(Mother-of Jan)
(Arctan π)

but this is just syntax. You can think of a term as a possibly complex name.

Sentences
Terms are combined into sentences, or well-formed formulas. We can think of sentences as asserting things, i.e., they can have a truth value. There are several kinds:

- Atomic sentences (atoms): A relation symbol and some terms.

Brothers(John,Bill)
Republican(Bush)
<(2,3)
+-rel(2,3,5)
Brothers(Father-of(Jan),Lou)

(Note: Sometimes we take liberties with the syntax and write certain predicates using infix notation: 2<3, Joe=MrSmith. Of course, this doesn't arise in LISP-style notation.)

- Logical sentences: Logical combinations of other sentences, with the usual logical connectives.

Brothers(Pat,Sue) ∧ <(2,3) ∧ +-rel(2,3,5)
Brothers(Pat,Sue) ∨ <(2,3)
Removed(Clinton) ∨ (Uncle(RW,monkey1) ∧ Monkey(monkey1))
Brothers(John,Bill) → Brothers(Bill,John)

Variables are allowed to appear in any of these. For example, it is perfectly okay to say

Brothers(John,x)

Intuitively, we don't know whether this is true or false until we find out what x is supposed to be. (This is a semantic issue -- more on this later.)

- Quantified sentences: We introduce quantifiers, which are things that help us interpret variables. Usually, there are two kinds, existential (∃) and universal (∀). We stick these in front of another sentence:

∃x (<(2,x) ∧ <(x,3))
∃x Brothers(John,x)

Quantifiers have a scope. For example, consider the logical sentence

∃x Brothers(John,x) ∧ ∃x Uncle(John,x)

The two different x's have nothing to do with each other. If a variable is within the scope of a quantifier, it is bound; otherwise, it is free. Quantifiers apply only to otherwise free variables. I.e., even if I changed the parentheses above, that wouldn't affect the way we interpret the variables in the sentence (although it would be more confusing).

Scope becomes more interesting when we build up more interesting, more complex sentences:

∀x ∃y (Brothers(x,Father-of(y)) ∧ Uncle(x,Jan))

Note that here, it wouldn't make any difference whether the ∃y scopes over the whole conjunction or just over the first conjunct, since y doesn't appear in the second one. So it was okay not to bother with parentheses. But in general, this isn't true:

∀x ∃y Loves(x,y)   Everybody loves somebody.
∃y ∀x Loves(x,y)   There's somebody who everyone loves.

In general, order doesn't make any difference in a series of the same quantifier, but it matters a lot where the quantifiers change.

A sentence with no free variables is a closed sentence; a sentence with no variables at all is a ground sentence.

We will only allow quantification over objects, not over functions and predicates. A language with this restriction is called a first-order language. For example, the following sentence seems to require quantification over predicates:

Everything that applied to Russia under the Czars applied to Russia under the Communists.

Why not allow more expressiveness? Certain nice formal properties that FOPC has aren't shared by second-order languages. More on this later.

Some things to think about: If we write Brothers(John,Bill), does it automatically follow that Brothers(Bill,John)? What other kinds of quantifiers might there be? (Generalized quantifiers, e.g.: just about everyone, most.)
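Here is a rough sketch of the bound/free distinction (my own encoding and naming conventions, not anything prescribed by the notes): formulas are nested tuples, lowercase argument names stand in for variables, and a quantifier removes its own variable from the free-variable set of its body.

    # Formulas: ("atom", pred, args), ("not", f), ("and"|"or"|"implies", f1, f2),
    # ("forall"|"exists", var, f).  Lowercase argument names are variables.

    def free_vars(formula):
        tag = formula[0]
        if tag == "atom":
            return {a for a in formula[2] if a.islower()}
        if tag == "not":
            return free_vars(formula[1])
        if tag in ("and", "or", "implies"):
            return free_vars(formula[1]) | free_vars(formula[2])
        if tag in ("forall", "exists"):
            return free_vars(formula[2]) - {formula[1]}   # the quantifier binds its variable
        raise ValueError(tag)

    # (exists x Brothers(John, x)) and Uncle(John, x): only the second x is free.
    f = ("and",
         ("exists", "x", ("atom", "Brothers", ("John", "x"))),
         ("atom", "Uncle", ("John", "x")))
    print(free_vars(f))   # {'x'} -- free, because it lies outside the quantifier's scope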

4 Semantics

But what do any of these things mean? We can be more formal about this, but, intuitively, we define an interpretation as follows:

- A universe of discourse with objects, relations, and functions
- A mapping of constants to these elements

We can interpret terms in the obvious way, given an interpretation: they denote the unique object picked out by the function. For example, if Father-of denotes the FATHER function, Neil denotes some person NEIL, and the FATHER function relates GEORGE to NEIL, then Father-of(Neil) denotes or designates GEORGE.

We say that a sentence is satisfied, under an interpretation, if the sentence is true relative to the interpretation, i.e., it corresponds to something true in our universe of discourse. It is easy to see what to do with ground sentences (sentences with no variables). We can build up a recursive definition: an atomic formula is true if the specified relation holds; logical sentences are true as a truth-functional combination of their parts. (There is a special case, though. There is a relation called equality that syntactically looks like any other relation. However, it is interpreted as true if the two terms asserted to be equal designate the same element of the universe.)

But what about those quantifiers? We'll need another function, called a variable assignment, that maps variables to elements of the universe. For sentences with free variables, it only makes sense to ask if they are satisfied (in a given interpretation) with a particular variable assignment. For variables in the scope of a quantifier, though, we can specify more useful rules: a universally quantified sentence is satisfied if the enclosed sentence is true for all variable assignments; an existentially quantified sentence is satisfied if the enclosed sentence is true for some variable assignment.

If an interpretation satisfies a sentence for all variable assignments, then it is called a model of the sentence. (This is equivalent to treating free variables as if they were universally quantified.)
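To make the satisfaction definition concrete, here is a small model checker over a finite universe (a sketch of mine, reusing the tuple encoding from the earlier sketch; it is not part of the notes). An interpretation maps constants to domain elements and predicate symbols to sets of tuples, and the quantifier clauses simply try every way of extending the variable assignment.

    def holds(formula, interp, domain, assignment=None):
        """Is `formula` satisfied by `interp` over `domain` under `assignment`?"""
        a = assignment or {}
        tag = formula[0]
        if tag == "atom":
            _, pred, args = formula
            values = tuple(a.get(arg, interp.get(arg, arg)) for arg in args)
            return values in interp[pred]
        if tag == "not":
            return not holds(formula[1], interp, domain, a)
        if tag == "and":
            return holds(formula[1], interp, domain, a) and holds(formula[2], interp, domain, a)
        if tag == "implies":
            return (not holds(formula[1], interp, domain, a)) or holds(formula[2], interp, domain, a)
        if tag == "forall":
            return all(holds(formula[2], interp, domain, {**a, formula[1]: d}) for d in domain)
        if tag == "exists":
            return any(holds(formula[2], interp, domain, {**a, formula[1]: d}) for d in domain)
        raise ValueError(tag)

    # The interpretation discussed below: individuals {LYNN, JAN}, LOVES = {<LYNN,JAN>, <JAN,JAN>}.
    domain = {"LYNN", "JAN"}
    interp = {"Jan": "JAN", "Loves": {("LYNN", "JAN"), ("JAN", "JAN")}}
    print(holds(("exists", "x", ("atom", "Loves", ("x", "Jan"))), interp, domain))      # True
    interp_bad = {"Jan": "LYNN", "Loves": {("LYNN", "JAN"), ("JAN", "JAN")}}
    print(holds(("exists", "x", ("atom", "Loves", ("x", "Jan"))), interp_bad, domain))  # False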

A sentence is satisfiable if and only if there is some interpretation that satisfies it.

A sentence is valid if and only if all interpretations satisfy it. (I.e., it is a tautology.)

A sentence is unsatisfiable if and only if no interpretation satisfies it. These are out-and-out internal contradictions; i.e., they have inconsistent subparts.

Some examples:

∀x ∃y >(y,x) is satisfied by some interpretations, but not by others.
∀x P(x) → ∃x P(x) happens to be true in all interpretations.
∀x x=x is likewise true in all interpretations.

For example, the sentence ∃x Loves(x,Jan) is satisfied by an interpretation in which, say, all the individuals are {LYNN,JAN}, LOVES is the relation {<LYNN,JAN>,<JAN,JAN>}, Jan is mapped to JAN, and Loves is mapped to LOVES (but not, say, in the interpretation where Jan is mapped to LYNN).

Another example:

∀x (P(x) → F(x) = G(x, F(H(x))))

Here's an interpretation:
- The domain is the natural numbers
- H is the predecessor function (i.e., it subtracts 1)
- G is the product function
- P is the predicate "> 0"
- F is ... ?? (factorial!)

Alternatively, suppose:
- The domain is all lists
- H is CDR
- G appends the (list of the) head of its first argument to its second argument
- P is the predicate "non-nil"
- F is REVERSE

That is, the equation states that for every list x, if x is not nil, then REVERSE(x) = APPEND-HEAD-TO-SECOND(x, REVERSE(CDR(x))).
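As a quick sanity check (my own sketch; the Python names are just stand-ins for the symbols above), we can spot-check the sentence ∀x (P(x) → F(x) = G(x, F(H(x)))) under both of the interpretations just described, for a few elements of each domain.

    from math import factorial

    # Interpretation 1: domain = natural numbers, F = factorial.
    P = lambda x: x > 0                # "> 0"
    H = lambda x: x - 1                # predecessor
    G = lambda x, y: x * y             # product
    F = factorial
    print(all(not P(x) or F(x) == G(x, F(H(x))) for x in range(10)))                     # True

    # Interpretation 2: domain = lists, F = REVERSE.
    P = lambda x: x != []              # non-nil
    H = lambda x: x[1:]                # CDR
    G = lambda x, y: y + [x[0]]        # append (list of) the head of x to y
    F = lambda x: list(reversed(x))    # REVERSE
    print(all(not P(x) or F(x) == G(x, F(H(x))) for x in [[], [1], [1, 2], [3, 1, 2]]))  # True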

Note that we have not yet specified how we reason with this language. First, let us look at how we might represent things in it.

5 Advantages Over Propositional Calculus

Let's see how predicate calculus helps. Instead of just propositions, we can make very general statements, such as the following rules about when a block can be moved:

∀x (Block(x) ∧ Clear(x) → Move(x, Table))
∀x ∀z (Block(x) ∧ Clear(x) ∧ Block(z) ∧ Clear(z) → Move(x, z))

So, the advantage here is that we have a much smaller representation. Note, also, that we don't have to add any rules if we just add another block, say. Our representation doesn't grow much as the problem gets bigger.
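A tiny sketch of that point (mine, with made-up predicate spellings): a single universally quantified rule can be instantiated for however many blocks we happen to have, so adding a block adds ground instances and facts, not new rules.

    def ground_instances(blocks):
        """Instantiate 'forall x (Block(x) & Clear(x) -> Move(x, Table))' for each block."""
        return [f"Block({b}) & Clear({b}) -> Move({b}, Table)" for b in blocks]

    for rule in ground_instances(["A", "B", "C"]):   # add "D" and nothing else changes
        print(rule)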