An Inquiry Dialogue System


Autonomous Agents and Multi-Agent Systems manuscript No. (will be inserted by the editor)

An Inquiry Dialogue System

Elizabeth Black · Anthony Hunter

Received: date / Accepted: date

E. Black, COSSAC: IRC in Cognitive Science and Systems Engineering, Department of Engineering Science, University of Oxford, Oxford, UK. A. Hunter, Department of Computer Science, University College London, London, UK.

Abstract The majority of existing work on agent dialogues considers negotiation, persuasion or deliberation dialogues; we focus on inquiry dialogues, which allow agents to collaborate in order to find new knowledge. We present a general framework for representing dialogues and give the details necessary to generate two subtypes of inquiry dialogue that we define: argument inquiry dialogues allow two agents to share knowledge to jointly construct arguments; warrant inquiry dialogues allow two agents to share knowledge to jointly construct dialectical trees (essentially a tree with an argument at each node in which a child node is a counter-argument to its parent). Existing inquiry dialogue systems only model dialogues, meaning they provide a protocol which dictates what the possible legal next moves are but not which of these moves to make. Our system not only includes a dialogue-game style protocol for each subtype of inquiry dialogue that we present, but also a strategy that selects exactly one of the legal moves to make. We propose a benchmark against which we compare our dialogues, being the arguments that can be constructed from the union of the agents' beliefs, and use this to define soundness and completeness properties that we show hold for all inquiry dialogues generated by our system.

Keywords agent interaction · argumentation · inquiry dialogue · cooperation

1 Introduction

Dialogue games are now a common approach to characterizing argumentation-based agent dialogues (e.g. [33,39,42]). Dialogue games are normally made up of a set of communicative acts called moves, and sets of rules stating: which moves it is legal to make at any point in a dialogue (the protocol); the effect of making a move; and when a dialogue terminates. One attraction of dialogue games is that it is possible
to embed games within games, allowing complex conversations made up of nested dialogues of more than one type (e.g. [34,45]). Most of the work so far has looked only at modelling different types of dialogue from the influential Walton and Krabbe typology [49], meaning that such systems provide a protocol which dictates what the possible legal next moves are but not which one of these legal moves to make. Here we present a generative system, as we not only provide a protocol but also provide a strategy for selecting exactly one of the legal moves to make. Examples of dialogue systems which model each of the five main Walton and Krabbe dialogue types are: information-seeking [30,40] (where participants aim to share knowledge); inquiry [32,40] (where participants aim to jointly discover new knowledge); persuasion [4,17] (where participants aim to resolve conflicts of opinion); negotiation [3,30,35,47] (where participants who need to cooperate aim to agree on a method for doing this that resolves their conflicting interests); and deliberation [29] (where participants aim to jointly decide on a plan of action). Walton and Krabbe classify their dialogue types according to three characteristics: the initial situation from which the dialogue arises; the main goal of the dialogue, to which all the participating agents subscribe; and the personal aims of each individual agent. This article focuses on two subtypes of inquiry dialogue that we define. Walton and Krabbe define an inquiry dialogue as arising from an initial situation of general ignorance and as having the main goal to achieve the growth of knowledge and agreement. Each individual participating in an inquiry dialogue has the goal to "find a proof or destroy one" [49, page 66]. We have previously proposed a dialogue system [12] for generating a subtype of inquiry dialogue that we call argument inquiry.
In an argument inquiry dialogue, the proof that the participating agents are jointly searching for takes the form of an argument for the topic of the dialogue. In this article we adapt the system proposed in [12] to generate a second subtype of inquiry dialogue that we call warrant inquiry. In a warrant inquiry dialogue, the proof that the participating agents are jointly searching for takes the form of a dialectical tree (essentially a tree with an argument at each node, where each child node is a counter-argument to its parent and that has at its root an argument whose claim is the topic of the dialogue). Warrant inquiry dialogues are so called as the dialectical tree produced during the dialogue may act as a warrant for the argument at its root. The goal, then, of the participants in an argument inquiry dialogue is to share beliefs in order to jointly construct arguments for a specific claim that none of the individual participants may construct from their own personal beliefs alone; the goal of agents taking part in a warrant inquiry dialogue is to share arguments in order to jointly construct a dialectical tree that none of the individual participants may construct from their own personal beliefs alone. In an argument inquiry dialogue, the agents wish to exchange beliefs in order to jointly construct arguments for a particular claim; however, an argument inquiry dialogue does not allow the agents to determine the acceptability of the arguments constructed (i.e. whether the arguments are ultimately defeated by any other conflicting arguments). In a warrant inquiry dialogue, the agents are interested in determining the acceptability of a particular argument; they do this by jointly constructing a dialectical tree that collects all the arguments that may be relevant to the acceptability of the argument in question. Argument inquiry dialogues are often embedded within warrant inquiry dialogues.
Without embedded argument inquiry dialogues, the arguments that can be exchanged within a warrant inquiry dialogue potentially miss out on useful arguments that involve
unexpressed beliefs of the other agent. (We have presented the system for generating argument inquiry dialogues previously [12]; we present it again here as it is necessary for generating warrant inquiry dialogues.) The main contribution of this article is a protocol and strategy sufficient to generate sound and complete warrant inquiry dialogues. As far as we are aware, there are only two other groups that have proposed inquiry protocols. Amgoud, Maudet, Parsons and Wooldridge proposed a protocol for general argument inquiry dialogues (e.g. [3,39]); however, this protocol can lead to unsuccessful dialogues in which no argument for the topic is found even when such an argument does exist in the union of the two agents' beliefs. In [32], McBurney and Parsons present an inquiry protocol that is similar in spirit to our warrant inquiry protocol in that it allows the agents involved to dialectically reason about the acceptability of an argument given a set of arguments and the counter-argument relations between them. Although their protocol allows agents to exchange arguments in order to carry out the dialectical reasoning, it does not allow the agents to jointly construct arguments, and so the reasoning that they participate in may be incomplete in the sense that it may miss important arguments that can only be constructed jointly by two or more agents. Neither of these groups has proposed a strategy for use with their inquiry protocol, i.e. their systems model inquiry dialogues but are not sufficient to generate them. A key contribution of this work is that we not only provide a protocol for modelling inquiry dialogues but we also provide a specific strategy to be followed, making this system sufficient to also generate inquiry dialogues. Other works have also considered the generation of dialogues. For example, [44] gives an account of the different factors which must be considered when designing a dialogue strategy. Parsons et al.
[39] explore the effect of different agent attitudes, which reduce the set of legal moves from which an agent must choose a move but do not select exactly one of the legal moves to make. Pasquier et al.'s cognitive coherence theory [41] addresses the pragmatic issue of dialogue generation, but it is not clear what behaviour this would produce. Both [2] and [31] propose a formalism for representing the private strategy of an agent, to which argumentation is then applied to determine the move to be made at a point in a dialogue; however, neither gives a specific strategy for inquiry dialogues. Whilst much of the work on argumentation has been intended for use in adversarial domains such as law (e.g. [5,7,10,28,43]), we have been inspired by the cooperative medical domain. In adversarial domains, agents participating in a dialogue are typically concerned with defending their own arguments and defeating the arguments of their opponents; in cooperative domains, agents instead aim to arrive at the best joint outcome, even if this means accepting the defeat of their own arguments. Medical knowledge is typically uncertain and often incomplete and inconsistent, making argumentation an attractive approach for carrying out reasoning and decision making in the medical domain [24]. Inquiry dialogues are a type of dialogue that is of particular use in the medical domain, where it is often the case that people have distinct types of knowledge and so need to interact with others in order to have all the information necessary to make a decision. Another important characteristic of the medical domain is that it is safety-critical [23]; if our dialogue system is to be used in such a domain, it is essential that the dialogues our system produces arrive at the appropriate outcome. We wish the outcome of our dialogues to be predetermined by the fixed protocol, the strategy being followed and the belief bases of the participating agents; i.e.
given the agents' beliefs, we want to know what outcome they will arrive at and that this will be the appropriate outcome.
As discussed in [39], this can be viewed as a positive or negative feature of a dialogue system depending on the application. In a more competitive environment it may well be the case that one would wish it to be possible for agents to behave in an intelligent manner in order to influence the outcome of a dialogue. However, we want our dialogues to always lead to the ideal outcome. That is to say, we want the dialogues generated by our system to be sound and complete, in relation to some standard benchmark. We compare the outcome of our dialogues with the outcome that would be arrived at by a single agent whose beliefs are the union of the beliefs of both agents participating in the dialogue. This is, in a sense, the ideal situation, where there are clearly no constraints on the sharing of beliefs. As the dialogue outcome we are aiming for is the same as the outcome we would arrive at if reasoning with the union of the agents' beliefs, a natural question to ask is why not simply pool the agents' beliefs and then reason with this set? In some situations, it may indeed be more appropriate to pool the agents' beliefs (e.g. as part of computer-supported collaborative learning [27]); however, in many real-world scenarios, such as within the medical domain, there are often privacy issues that would restrict the agents from simply pooling all beliefs; what we provide here can be viewed as a mechanism for a joint directed search that ensures the agents only share beliefs that could be relevant to the topic of the dialogue. It could also be the case that the belief bases of the agents are so vast that the communication costs involved in pooling all beliefs would be prohibitive. The main contribution of this article is a system for generating sound and complete warrant inquiry dialogues. We build on the system we proposed in [12] in the sense that we use the same underlying formalism for modelling and generating dialogues.
However, whilst in [12] we provided only a protocol and strategy for generating sound and complete argument inquiry dialogues, here we also include a protocol and strategy for generating warrant inquiry dialogues and give soundness and completeness results for all inquiry dialogues generated by our system. We have presented the details relating to argument inquiry dialogues (that were previously given in [12]) again in this article as they are necessary for generating warrant inquiry dialogues. The rest of this article proceeds as follows. In Section 2 we present the knowledge representation used by the agents in our system and define how this knowledge can be used to construct arguments, and in Section 3 we present a method for constructing a dialectical tree in order to carry out a dialectical analysis of a set of arguments. Sections 2 and 3 thus present the argumentation system on which this dialogue system operates, which is based on García and Simari's Defeasible Logic Programming (DeLP) [25]. The presentation of the argumentation system here differs only slightly from that in [25] and does not represent a contribution of this work. In Section 4 we define the general framework used to represent dialogues. In Section 5 we give the protocols for modelling both argument inquiry and warrant inquiry dialogues, and also give a strategy for use with these protocols that allows agents to generate the dialogues (completing our inquiry dialogue system). In Section 6 we define soundness and completeness properties for both argument inquiry and warrant inquiry dialogues and show that these properties hold for all well-formed inquiry dialogues generated by our system. Finally, in Section 7 we discuss other work related to this and in Section 8 we summarise our conclusions.
2 Knowledge representation and arguments

We adapt García and Simari's Defeasible Logic Programming (DeLP) [25] for representing each agent's beliefs. DeLP is a formalism that combines logic programming with defeasible argumentation. It is intended to allow a single agent to reason internally with inconsistent and incomplete knowledge that may change dynamically over time and has been shown to be applicable in different real-world contexts (e.g. [14,26]). It provides a warrant procedure, which we will present in Section 3, that applies a dialectical reasoning mechanism to a set of arguments in order to decide whether a particular argument from that set is warranted. The presentation here differs only slightly from that in [25]. García and Simari assume that, as well as a set of defeasible rules, there is a set of strict rules. They also assume that facts are non-defeasible. As we are inspired by the medical domain (in which we know knowledge to often be incomplete, unreliable and inconsistent), we wish all knowledge to be treated as defeasible. We deal with this by assuming the sets of strict rules and facts are empty and by defining a defeasible fact (essentially a defeasible rule with an empty body). We use a restricted set of propositional logic and assume that a literal is either an atom α or a negated atom ¬α. We use the notation ⊢ to represent the classical consequence relation; we use ⊥ to represent classical contradiction; we write ᾱ for the complement of α, i.e. ᾱ = ¬a if and only if α = a, and ᾱ = a if and only if α = ¬a.

Definition 1 A defeasible rule is denoted α1 ∧ ... ∧ αn → α0 where αi is a literal for 0 ≤ i ≤ n. A defeasible fact is denoted α where α is a literal.

The warrant procedure defined in [25] assumes that a formal criterion exists for comparing two arguments. We use a preference ordering across all knowledge, from which a preference ordering across arguments is derived.
We imagine that the preference ordering on medical knowledge would depend on the knowledge source [50]; we might assume, for example, that knowledge from an established clinical guideline is preferred to knowledge that has resulted from a small clinical trial. We associate a preference level with a defeasible rule or defeasible fact to form a belief; the lower the preference level, the more preferred the belief.

Definition 2 A belief is a pair (φ, L) where φ is either a defeasible fact or a defeasible rule, and L ∈ {1, 2, 3, ...} is a label that denotes the preference level of the belief. The function plevel returns the preference level of the belief: plevel((φ, L)) = L. The set of all beliefs is denoted B.

We make a distinction between beliefs in defeasible facts (called state beliefs, as these are beliefs about the state of the world) and beliefs in defeasible rules (called domain beliefs, as these are beliefs about how the domain is expected to behave). We also consider the set of defeasible facts and the set of defeasible rules, and the union of these two sets.

Definition 3 A state belief is a belief (φ, L) where φ is a defeasible fact. The set of all state beliefs is denoted S. A domain belief is a belief (φ, L) where φ is a defeasible rule. The set of all domain beliefs is denoted R. The set of all defeasible facts is denoted S* = {φ | (φ, L) ∈ S}. The set of all defeasible rules is denoted R* = {φ | (φ, L) ∈ R}. The set of all defeasible rules and all defeasible facts is denoted B* = {φ | (φ, L) ∈ B} = S* ∪ R*.
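To make Definitions 1–3 concrete, the following is a minimal sketch of one possible encoding (ours, not the authors'): the names `Belief`, `plevel` and `is_state_belief` are illustrative, negation is written as `"~"`, and a rule's body is a tuple of literals (empty for a defeasible fact). The belief base shown is Σ1 from Example 1.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Belief:
    """A belief (Definition 2): a defeasible fact or rule plus a level."""
    body: Tuple[str, ...]   # empty tuple => defeasible fact
    head: str               # a literal, e.g. "a" or "~a"
    level: int              # preference level; lower = more preferred

def plevel(b: Belief) -> int:
    """Return the preference level of a belief (Definition 2)."""
    return b.level

def is_state_belief(b: Belief) -> bool:
    """State beliefs are beliefs in defeasible facts (Definition 3)."""
    return len(b.body) == 0

# Agent 1's belief base from Example 1, with "~" for negation:
sigma1 = {
    Belief((), "a", 1), Belief((), "~a", 1),
    Belief((), "b", 2), Belief((), "d", 1),
    Belief(("a",), "c", 3), Belief(("b",), "~c", 2),
    Belief(("d",), "~b", 1), Belief(("~a",), "~b", 1),
}
```

Here exactly the four body-less beliefs are state beliefs, and the remaining four are domain beliefs, matching the commentary in Example 1.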
We assume that there are always exactly two agents (participants) taking part in a dialogue, each with its own identifier taken from the set I = {1, 2}. Although we have restricted the number of participants to two here for the sake of simplicity, we believe it is straightforward to adapt the system to allow multiple participants. Many of the difficult issues associated with multi-party dialogues (e.g. What are the agents' roles? How to manage turn taking? Who should be addressed with each move? [18]) can be easily overcome here due to the collaborative and exhaustive nature of the dialogues we are considering (e.g. all agents have the same role; each agent is assigned a place in a sequence and that sequence is followed for turn taking; each move made is broadcast to all the participants). We are currently working on multi-party dialogues that use the framework presented here. Each agent has a, possibly inconsistent, belief base.

Definition 4 A belief base associated with an agent x is a finite set, denoted Σx, such that Σx ⊆ B and x ∈ I = {1, 2}.

Example 1 Consider the following belief base associated with agent 1.

Σ1 = { (a, 1), (¬a, 1), (b, 2), (d, 1), (a → c, 3), (b → ¬c, 2), (d → ¬b, 1), (¬a → ¬b, 1) }

The top four elements are state beliefs and we can see that the agent conflictingly believes strongly in both a and ¬a. The bottom four elements are all domain beliefs. plevel((a, 1)) = 1, plevel((¬a, 1)) = 1, plevel((b, 2)) = 2, plevel((d, 1)) = 1, plevel((a → c, 3)) = 3, plevel((b → ¬c, 2)) = 2, plevel((d → ¬b, 1)) = 1, plevel((¬a → ¬b, 1)) = 1. Recall that the lower the plevel value, the more preferred the belief.

We now define what constitutes a defeasible derivation. This has been adapted slightly from [25] in order to deal with our definition of a belief.

Definition 5 Let Ψ be a set of beliefs and α a defeasible fact.
A defeasible derivation of α from Ψ, denoted Ψ |∼ α, is a finite sequence α1, α2, ..., αn of literals such that αn is the defeasible fact α and each literal αm (1 ≤ m ≤ n) is in the sequence because:
– (αm, L) is a state belief in Ψ, or
– there is a domain belief (β1 ∧ ... ∧ βj → αm, L) ∈ Ψ s.t. every literal βi (1 ≤ i ≤ j) is an element αk preceding αm in the sequence (k < m).

The function DefDerivations : ℘(B) → ℘(S*) returns the set of literals that can be defeasibly derived from a set of beliefs Ψ such that DefDerivations(Ψ) = {φ | there exists Φ ⊆ Ψ such that Φ |∼ φ}.

Example 2 If we continue with the running example started in Example 1, we see that the following defeasible derivations exist from Σ1: a; ¬a; b; d; a, c; b, ¬c; d, ¬b; ¬a, ¬b.

We now define an argument as being a minimally consistent set from which the claim can be defeasibly derived.

Definition 6 An argument A constructed from a set of, possibly inconsistent, beliefs Ψ (Ψ ⊆ B) is a tuple ⟨Φ, φ⟩ where φ is a defeasible fact and Φ is a set of beliefs such that:
1. Φ ⊆ Ψ,
2. Φ |∼ φ,
3. for all φ′, φ′′ s.t. Φ |∼ φ′ and Φ |∼ φ′′, it is not the case that φ′ is the complement of φ′′ (i.e. Φ is consistent),
4. there is no proper subset of Φ that satisfies (1)–(3).

Φ is called the support of the argument and is denoted Support(A); φ is called the claim of the argument and is denoted Claim(A). For two arguments A1 and A2, A1 is a subargument of A2 iff Support(A1) ⊆ Support(A2). The set of all arguments that can be constructed from a set of beliefs Ψ is denoted A(Ψ).

Example 3 Continuing the running example, the following arguments can be constructed by the agent.

A(Σ1) = { a1 = ⟨{(a, 1)}, a⟩, a2 = ⟨{(¬a, 1)}, ¬a⟩, a3 = ⟨{(b, 2)}, b⟩, a4 = ⟨{(d, 1)}, d⟩, a5 = ⟨{(a, 1), (a → c, 3)}, c⟩, a6 = ⟨{(b, 2), (b → ¬c, 2)}, ¬c⟩, a7 = ⟨{(d, 1), (d → ¬b, 1)}, ¬b⟩, a8 = ⟨{(¬a, 1), (¬a → ¬b, 1)}, ¬b⟩ }

Note, a1 is a subargument of a5, a2 is a subargument of a8, a3 is a subargument of a6 and a4 is a subargument of a7. Every argument is a subargument of itself. As Ψ may be inconsistent, there may be conflicts between arguments within A(Ψ).

Definition 7 Let A1 and A2 be two arguments. A1 is in conflict with A2 iff {Claim(A1), Claim(A2)} ⊢ ⊥ (i.e. Claim(A1) is the complement of Claim(A2), as the claim of an argument is always a literal).

Example 4 Continuing the running example, a1 is in conflict with a2, a3 is in conflict with a7, a3 is in conflict with a8, and a5 is in conflict with a6.

Note that, as the claim of an argument is always a literal, two arguments are in conflict with one another if and only if their claims are the complement of one another. For certain applications, we may need a different definition of conflict between two arguments, depending on the meaning of the arguments' claims and the purpose of the argumentation process; for example, if the purpose of the argumentation is to decide between alternative actions to try to achieve a goal, we may want to define two arguments as being in conflict if their claims are two distinct actions. We now define the attack relationship between arguments.
Definition 8 Let A1, A2 and A3 be arguments such that A3 is a subargument of A2. A1 attacks A2 at subargument A3 iff A1 is in conflict with A3.

Note that in the previous definition the subargument A3 is unique, as we will now show. A1 and A3 conflict, and so the claim of A1 is the negation of the claim of A3. Let us assume that there is another disagreement subargument A4 such that A1 attacks A2 at A4; then A1 and A4 conflict, and so the claim of A1 is the negation of the claim of A4. As the claim of A1 is a literal, this means that the claim of A4 is the same as the claim of A3. As an argument is minimal, this means that A3 and A4 must be the same argument.

Example 5 Continuing the running example: a1 attacks a2 at a2, a1 attacks a8 at a2, a2 attacks a1 at a1, a2 attacks a5 at a1, a3 attacks a7 at a7, a3 attacks a8 at a8, a5 attacks a6 at a6, a6 attacks a5 at a5, a7 attacks a3 at a3, a7 attacks a6 at a3, a8 attacks a3 at a3, and a8 attacks a6 at a3.
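Definitions 5 and 6 can be checked mechanically on the running example. The sketch below is our own illustration, not the authors' code: beliefs are encoded as `(body, head, level)` triples with `"~"` for negation, `def_derivations` forward-chains to a fixpoint, and `arguments` enumerates A(Ψ) by brute force (exponential, so suitable only for tiny belief bases like Σ1).

```python
from itertools import combinations

# Agent 1's belief base (Example 1); a fact has an empty body.
SIGMA1 = [
    ((), "a", 1), ((), "~a", 1), ((), "b", 2), ((), "d", 1),
    (("a",), "c", 3), (("b",), "~c", 2),
    (("d",), "~b", 1), (("~a",), "~b", 1),
]

def def_derivations(beliefs):
    """DefDerivations (Definition 5): forward-chain until no new literal."""
    derived, changed = set(), True
    while changed:
        changed = False
        for body, head, _ in beliefs:
            if head not in derived and all(l in derived for l in body):
                derived.add(head)
                changed = True
    return derived

def complement(lit):
    """The complement of a literal: a <-> ~a."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def arguments(beliefs):
    """Brute-force A(Psi) (Definition 6): consistent, minimal supports
    paired with a derivable claim."""
    found = set()
    for r in range(1, len(beliefs) + 1):
        for support in combinations(beliefs, r):
            derived = def_derivations(support)
            if any(complement(l) in derived for l in derived):
                continue  # condition 3: the support must be consistent
            for claim in derived:
                # condition 4: no proper subset may already derive the claim
                minimal = not any(
                    claim in def_derivations(sub)
                    for k in range(1, r)
                    for sub in combinations(support, k))
                if minimal:
                    found.add((frozenset(support), claim))
    return found
```

On `SIGMA1`, `def_derivations` yields the literals of Example 2, and `arguments` yields exactly the eight arguments a1–a8 of Example 3.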
Given that we know one argument attacks another, we need a mechanism for deciding whether the attacking argument successfully defeats the argument being attacked or not. We base this on the preference level of the argument, which is equal to that of the least preferred belief used in its support.

Definition 9 Let A be an argument. The preference level of A, denoted plevel(A), is equal to plevel(φ) such that:
1. φ ∈ Support(A),
2. for all φ′ ∈ Support(A), plevel(φ′) ≤ plevel(φ).

Given that we have an argument A1 that attacks an argument A2 at subargument A3, we say that A1 defeats A2 if the preference level of A1 is the same as or less than (meaning the same as or more preferred than) the preference level of A3. If it is the same, then A1 is a blocking defeater for A2; if it is less, then A1 is a proper defeater for A2.

Definition 10 Let A1, A2 and A3 be arguments such that A3 is a subargument of A2 and A1 attacks A2 at subargument A3. A1 is a proper defeater for A2 iff plevel(A1) < plevel(A3). A1 is a blocking defeater for A2 iff plevel(A1) = plevel(A3).

Example 6 Continuing the running example: a1 is a blocking defeater for a2, a1 is a blocking defeater for a8, a2 is a blocking defeater for a1, a2 is a blocking defeater for a5, a6 is a proper defeater for a5, a7 is a proper defeater for a3, a7 is a proper defeater for a6, a8 is a proper defeater for a3 and a8 is a proper defeater for a6.

In this section we have proposed a criterion for deciding whether an argument A1 that attacks an argument A2 defeats it or not. In the next section we introduce the warrant procedure from [25], which allows an agent to decide whether, given a set of interacting arguments, a particular argument from this set is ultimately defeated or undefeated.
3 Dialectical analysis of arguments

Given a set of beliefs Ψ and an argument A1 ∈ A(Ψ), in order to know whether A1 is defeated or not, an agent has to consider each argument from A(Ψ) that attacks A1 and decide whether or not it defeats it. However, a defeater of A1 may itself be defeated by another argument A2 ∈ A(Ψ). Defeaters may also exist for A2, and these may themselves have defeaters. Therefore, in order to decide whether A1 is defeated, an agent has to consider all defeaters for A1, all of the defeaters for those defeaters, and so on. Following [25], it does so by constructing a dialectical tree, where each node is labelled with an argument, the root node is labelled A1, and the arcs represent the defeat relation between arguments. Each path through the dialectical tree from root node to a leaf represents an argumentation line, where each argument in such a path defeats its predecessor. García and Simari impose some extra constraints on what is an acceptable argumentation line. This is because they wish to ensure that their system avoids such things as circular argumentation and that it imposes properties such as concordance between supporting or interfering arguments. For more information on acceptable argumentation lines and their motivation the reader should refer to [25].
Definition 11 If Λ = [⟨Φ0, φ0⟩, ⟨Φ1, φ1⟩, ⟨Φ2, φ2⟩, ...] is a sequence of arguments such that each element of the sequence ⟨Φi, φi⟩ is a defeater (proper or blocking) of its predecessor ⟨Φi−1, φi−1⟩, then Λ is an argumentation line. Λ is an acceptable argumentation line iff
1. Λ is a finite sequence,
2. the union Φ0 ∪ Φ2 ∪ Φ4 ∪ ... is consistent, and the union Φ1 ∪ Φ3 ∪ Φ5 ∪ ... is consistent,
3. no argument ⟨Φk, φk⟩ appearing in Λ is a subargument of an argument Aj that appears earlier in Λ (j < k),
4. for all i s.t. ⟨Φi, φi⟩ is a blocking defeater for ⟨Φi−1, φi−1⟩, if ⟨Φi+1, φi+1⟩ exists, then ⟨Φi+1, φi+1⟩ is a proper defeater for ⟨Φi, φi⟩.

Example 7 Continuing the running example, the following are all examples of argumentation lines; however, only Λ2 is an acceptable argumentation line. Λ1 is not acceptable as it breaks constraints (3) and (4), whereas Λ3 is not acceptable as it breaks constraint (2).

Λ1 = [a5, a2, a1, a2, a1, a2]
Λ2 = [a5, a6, a7]
Λ3 = [a5, a6, a8]

In order to determine whether the claim of an argument A0 is warranted given a set of beliefs Ψ, the agent must consider every acceptable argumentation line that it can construct from A(Ψ) which starts with A0. It does this by constructing a dialectical tree.

Definition 12 Let Ψ be a, possibly inconsistent, belief base and A0 be an argument such that A0 ∈ A(Ψ). A dialectical tree for A0 constructed from Ψ, denoted T(A0, Ψ), is defined as follows.
1. The root of the tree is labelled with A0.
2. Let N be a node of the tree labelled An, and let Λi = [A0, ..., An] be the sequence of labels on the path from the root to node N. Let arguments B1, B2, ..., Bk be all the defeaters for An that can be formed from Ψ. For each defeater Bj (1 ≤ j ≤ k), if the argumentation line [A0, ..., An, Bj] is an acceptable argumentation line, then the node N has a child Nj that is labelled Bj. If there is no defeater for An, or there is no Bj such that the extended line is acceptable, then N is a leaf node.
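Conditions (2) and (3) of Definition 11 can be checked mechanically. The sketch below is our own encoding (beliefs as `(body, head, level)` triples, arguments as `(support, claim)` pairs, `"~"` for negation); it tests conditions (1)–(3) on the lines of Example 7. Condition (4), which additionally needs the proper/blocking classification, is deliberately omitted here.

```python
def derivations(beliefs):
    """Forward-chain a set of (body, head, level) beliefs to a fixpoint."""
    derived, changed = set(), True
    while changed:
        changed = False
        for body, head, _ in beliefs:
            if head not in derived and all(l in derived for l in body):
                derived.add(head)
                changed = True
    return derived

def complement(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def consistent(beliefs):
    d = derivations(beliefs)
    return not any(complement(l) in d for l in d)

def satisfies_1_to_3(line):
    """Conditions 1-3 of Definition 11 for a list of (support, claim) args."""
    supports = [frozenset(s) for s, _ in line]
    evens = set().union(*supports[0::2])        # supports at even positions
    odds = set().union(*supports[1::2]) if supports[1::2] else set()
    if not (consistent(evens) and consistent(odds)):
        return False  # condition 2: each side's supports must be concordant
    for k in range(len(supports)):
        if any(supports[k] <= supports[j] for j in range(k)):
            return False  # condition 3: no subargument of an earlier argument
    return True  # condition 1 is automatic for a finite Python list

# The arguments used in Example 7:
a1 = ([((), "a", 1)], "a")
a2 = ([((), "~a", 1)], "~a")
a5 = ([((), "a", 1), (("a",), "c", 3)], "c")
a6 = ([((), "b", 2), (("b",), "~c", 2)], "~c")
a7 = ([((), "d", 1), (("d",), "~b", 1)], "~b")
a8 = ([((), "~a", 1), (("~a",), "~b", 1)], "~b")
```

On Example 7's lines this accepts Λ2 = [a5, a6, a7], rejects Λ1 (a1 is a subargument of the earlier a5, breaking condition (3)), and rejects Λ3 (the even-position supports of a5 and a8 jointly derive both a and ¬a, breaking condition (2)).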
To ensure that the construction of a dialectical tree is a finite process we must show that it is not possible to construct an infinite argumentation line that meets conditions 2–4 of an acceptable argumentation line (Definition 11). Fortunately, we can show that if condition 3 holds (no argument may be a subargument of an argument that appears earlier in the argumentation line, so in particular no argument can be repeated) and the arguments in the argumentation line come from a finite set (as they do in the construction of a dialectical tree), then the argumentation line is finite.
Proposition 1 Let Λ = [A0, A1, A2, ...] be an argumentation line such that, for any Ai appearing in Λ, Ai ∈ A(Φ) where Φ is a finite set of beliefs. If no argument Ak appearing in Λ is a subargument of an argument Aj that appears earlier in Λ (j < k), then Λ is a finite sequence of arguments.

Proof: Since Φ is a finite set, it follows from the definition of an argument (Def. 6) that A(Φ) is also a finite set. Since the arguments from the sequence Λ come from this finite set, and since we are constrained that no argument in Λ can be repeated, it follows that Λ is a finite sequence.

From the above proposition we see that, although we may go on constructing a dialectical tree indefinitely and never know if a branch violated condition 1 of an acceptable argumentation line, by meeting condition 3 of an acceptable argumentation line we ensure that this will never be the case. Note that the root node of a dialectical tree T is denoted Root(T). Also note, we define two dialectical trees as being equal if and only if, whenever a sequence of labels appears from the root node to a node in one tree, it also appears as a sequence of labels from the root node to a node in the other. Our definition of dialectical tree equality takes into account the labels of all nodes that appear in the trees (as opposed to, for example, Chesñevar et al.'s definition of isomorphic dialectical trees [15], where blocking and proper defeaters are distinguished but no general constraint is made on the arguments that appear at the nodes of the trees). We have taken this approach as we will later show that the dialectical tree constructed by two agents during a warrant inquiry dialogue is equal to the dialectical tree that could be constructed from the union of their beliefs.

Definition 13 The dialectical trees T1 and T2 are equal to one another iff
1. the root of T1 is labelled with A0 iff the root of T2 is labelled with A0,
2.
if N1 is a node in T1 and [A0, ..., An] is the sequence of labels on the path from the root of T1 to N1, then there is an N2 s.t. N2 is a node in T2 and [A0, ..., An] is the sequence of labels on the path from the root of T2 to N2,
3. if N2 is a node in T2 and [A0, ..., An] is the sequence of labels on the path from the root of T2 to N2, then there is an N1 s.t. N1 is a node in T1 and [A0, ..., An] is the sequence of labels on the path from the root of T1 to N1.

Following [25], in order to determine whether the root of a dialectical tree is undefeated or not, we have to recursively mark each node in the tree as D (defeated) or U (undefeated), dependent on whether it has any undefeated child nodes that are able to defeat it.

Definition 14 Let T(A, Ψ) be a dialectical tree. The corresponding marked dialectical tree of T(A, Ψ) is obtained by marking every node in T(A, Ψ) as follows.
1. All leaves in T(A, Ψ) are marked U.
2. If N is a node of T(A, Ψ) and N is not a leaf node, then N will be marked U iff every child of N is marked D. The node N will be marked D iff it has at least one child marked U.

Example 8 Following the running example, the corresponding marked dialectical tree of T(a5, Σ1) is shown in Figure 1. Note that the arguments a1 and a8 do not appear in
[Fig. 1 (figure) A marked dialectical tree, T(a_5, Σ_1): the root ⟨{(a, 1), (a → c, 3)}, c⟩ is marked D; its children ⟨{(¬a, 1)}, ¬a⟩ (marked U) and ⟨{(b, 2), (b → c, 2)}, c⟩ (marked D); the latter has the child ⟨{(d, 1), (d → ¬b, 1)}, ¬b⟩ (marked U).]

the tree even though they are defeaters of arguments that do appear in the tree; this is because their inclusion would break conditions of an acceptable argumentation line (Definition 11): the inclusion of a_1 would break conditions 3 and 4, and the inclusion of a_8 would break condition 2.

The function Status takes an argument A and a set of beliefs Ψ. If the root node of the dialectical tree that has A at its root and is constructed from Ψ is marked U, then Status returns U; if it is marked D, then Status returns D.

Definition 15 The status of an argument A given a set of beliefs Ψ is returned by the function Status : A(B) × ℘(B) → {U, D} such that Status(A, Ψ) = U iff Root(T(A, Ψ)) is marked U in the corresponding marked dialectical tree of T(A, Ψ), and Status(A, Ψ) = D iff Root(T(A, Ψ)) is marked D in the corresponding marked dialectical tree of T(A, Ψ). The claim of an argument is warranted by the belief base if and only if the status of the root of the associated dialectical tree is U.

Example 9 Following the running example, as the root node of the tree T(a_5, Σ_1) is marked D (shown in Figure 1), and hence Status(a_5, Σ_1) = D, we see that the argument a_5 is not warranted by the belief base Σ_1.

This warrant procedure makes it possible for a single agent to reason with incomplete, inconsistent and uncertain knowledge. In the following two sections we propose a dialogue system that allows two agents to use this warrant procedure to jointly reason with their beliefs.

4 Representing dialogues

In this section we provide a general framework for representing dialogues. We define how dialogues may be nested one within another, what it means for a dialogue to terminate, and what we mean when we refer to the current dialogue.
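Before turning to dialogues, the single-agent warrant procedure of Section 3 (the marking of Definition 14 and the Status function of Definition 15) can be sketched in a few lines of Python. This is an illustrative sketch, not code from the paper; the (label, children) tree encoding and the node names are assumptions.

```python
# A sketch of the marking procedure of Definition 14 and the Status
# function of Definition 15.  A tree node is encoded as a
# (label, children) pair; leaves have an empty child list.

def mark(node):
    """Return 'U' (undefeated) or 'D' (defeated) for the given node."""
    _label, children = node
    if not children:
        return "U"              # condition 1: all leaves are marked U
    # condition 2: a non-leaf is D iff at least one child (a defeater of
    # its argument) is U; it is U iff every child is D.
    return "D" if any(mark(child) == "U" for child in children) else "U"

def status(tree):
    """Status of the argument at the root of the tree (Definition 15)."""
    return mark(tree)

# The shape of the tree in Figure 1: a root with two defeaters, one an
# undefeated leaf, the other itself defeated by an undefeated leaf.
# The labels are placeholders, not the arguments of the running example.
fig1_shape = ("root", [("left", []), ("mid", [("deep", [])])])
```

On this shape, status(fig1_shape) returns "D", mirroring Example 9, where the root of T(a_5, Σ_1) is marked D and a_5 is therefore not warranted.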
The framework given in this section is not specific to warrant inquiry and argument inquiry dialogues (the details of which will not come until the following section) and it can be used to represent dialogues of other types (e.g. [13]). The communicative acts in a dialogue are called moves. We assume that there are always exactly two agents (participants) taking part in a dialogue, each with its own
Move   | Format
open   | ⟨x, open, dialogue(θ, γ)⟩
assert | ⟨x, assert, ⟨Φ, φ⟩⟩
close  | ⟨x, close, dialogue(θ, γ)⟩

Table 1 The format for moves used in warrant inquiry and argument inquiry dialogues, where x ∈ I, ⟨Φ, φ⟩ is an argument, and either θ = wi (for warrant inquiry) and γ ∈ S (i.e. γ is a defeasible fact), or θ = ai (for argument inquiry) and γ ∈ R (i.e. γ is a defeasible rule).

identifier taken from the set I = {1, 2}. Each participant takes it in turn to make a move to the other participant. For a dialogue involving participants 1, 2 ∈ I, we also refer to participants using the variables x and x̂ such that if x is 1 then x̂ is 2, and if x is 2 then x̂ is 1.

A move in our framework is of the form ⟨Agent, Act, Content⟩: Agent is the identifier of the agent generating the move, Act is the type of move, and Content gives the details of the move. The format for moves used in warrant inquiry and argument inquiry dialogues is shown in Table 1, and the set of all moves meeting the format defined in Table 1 is denoted M. Note that the framework allows for other types of dialogues to be generated, and these might require the addition of extra moves (such as those suggested in [13]). Also, Sender : M → I is a function such that Sender(⟨Agent, Act, Content⟩) = Agent.

A dialogue is simply a sequence of moves, each of which is made from one participant to the other. As a dialogue progresses over time, we denote each timepoint by a natural number (N = {1, 2, 3, ...}). Each move is indexed by the timepoint when the move was made, and exactly one move is made at each timepoint. The dialogue itself is indexed with two timepoints, indexing its first and last moves.

Definition 16 A dialogue, denoted D_r^t, is a sequence of moves [m_r, ..., m_t] involving two participants in I = {1, 2}, where r, t ∈ N and r ≤ t, such that:
1. the first move of the dialogue, m_r, is a move of the form ⟨x, open, dialogue(θ, γ)⟩,
2. Sender(m_s) ∈ I (r ≤ s ≤ t),
3. Sender(m_s) ≠ Sender(m_{s+1}) (r ≤ s < t).
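The three conditions of Definition 16 are simple to check mechanically. The following Python sketch illustrates this; it is not part of the paper, and the Move encoding is an assumption of the illustration.

```python
from typing import Any, NamedTuple

class Move(NamedTuple):
    agent: int    # sender, an identifier from I = {1, 2}
    act: str      # 'open', 'assert' or 'close'
    content: Any  # e.g. ('ai', rule) for an open move

def is_dialogue(moves):
    """Check the three conditions of Definition 16 on a move sequence."""
    if not moves or moves[0].act != "open":
        return False                        # condition 1: starts with an open move
    if any(m.agent not in (1, 2) for m in moves):
        return False                        # condition 2: senders are participants
    return all(a.agent != b.agent           # condition 3: senders alternate
               for a, b in zip(moves, moves[1:]))
```

For example, [open by 1, close by 2, close by 1] satisfies all three conditions, while a sequence containing two consecutive moves by the same agent does not.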
The type of the dialogue D_r^t is returned by Type(D_r^t) such that Type(D_r^t) = θ (i.e. the type of the dialogue is determined by the content of the first move made). The topic of the dialogue D_r^t is returned by Topic(D_r^t) such that Topic(D_r^t) = γ (again determined by the content of the first move made). The set of all dialogues is denoted D.

The first move of a dialogue D_r^t must always be an open move (condition 1 of the previous definition), every move of the dialogue must be made by a participant of the dialogue (condition 2), and the agents take it in turns to make moves (condition 3). The type and the topic of a dialogue are determined by the content of the first move made: if the first move made in a dialogue is ⟨x, open, dialogue(θ, γ)⟩, then the type of the dialogue is θ and the topic of the dialogue is γ. In this article, we consider two different types of dialogue (i.e. two different values for θ): wi (for warrant inquiry) and ai (for argument inquiry). If a dialogue is a warrant inquiry dialogue, then its topic must be a defeasible fact; if a dialogue is an argument inquiry dialogue, then its topic must be a defeasible rule; these are requirements of the format of open moves defined in Table 1. Although we consider only warrant inquiry and argument inquiry dialogues
here, our definition of a dialogue is general so as to allow dialogues of other types to be considered within our framework. We now define some terminology that allows us to talk about the relationship between two dialogues.

Definition 17 Let D_r^t and D_{r1}^{t1} be two dialogues. D_{r1}^{t1} is a subdialogue of D_r^t iff D_{r1}^{t1} is a subsequence of D_r^t (r < r1 ≤ t1 ≤ t). D_r^t is a top-level dialogue iff r = 1; the set of all top-level dialogues is denoted D_top. D_1^t is a top-dialogue of D_r^t iff either the sequence D_1^t is the same as the sequence D_r^t or D_r^t is a subdialogue of D_1^t. If D_r^t is a sequence of n moves, then D_r^{t2} extends D_r^t iff the first n moves of D_r^{t2} are the sequence D_r^t.

In order to terminate a dialogue, two close moves must appear next to each other in the sequence (called a matched-close); this means that each participating agent must agree to the termination of the dialogue. A close move is used to indicate that an agent wishes to terminate the dialogue; it may be the case, however, that the other agent still has something it wishes to say, which may in turn cause the original agent to change its mind about wishing to terminate the dialogue.

Definition 18 Let D_r^t be a dialogue of type θ ∈ {wi, ai} with participants I = {1, 2} such that Topic(D_r^t) = γ. We say that m_s (r < s ≤ t) is a matched-close for D_r^t iff m_{s-1} = ⟨x, close, dialogue(θ, γ)⟩ and m_s = ⟨x̂, close, dialogue(θ, γ)⟩.

So a matched-close terminates a dialogue D_r^t, but only if D_r^t has not already terminated and any subdialogues that are embedded within D_r^t have already terminated; this notion will be needed later on to define well-formed inquiry dialogues.

Definition 19 Let D_r^t be a dialogue. D_r^t terminates at t iff the following conditions hold:
1. m_t is a matched-close for D_r^t,
2. ∄ D_r^{t1} s.t. D_r^{t1} terminates at t1 and D_r^t extends D_r^{t1},
3. ∀ D_{r1}^{t1}, if D_{r1}^{t1} is a subdialogue of D_r^t, then ∃ D_{r1}^{t2} s.t.
D_{r1}^{t2} terminates at t2, either D_{r1}^{t2} extends D_{r1}^{t1} or D_{r1}^{t1} extends D_{r1}^{t2}, and D_{r1}^{t2} is a subdialogue of D_r^t.

As we are often dealing with multiple nested dialogues, it is often useful to refer to the current dialogue, which is the innermost dialogue that has not yet terminated. As dialogues of one type may be nested within dialogues of another type, an agent must refer to the current dialogue in order to know which protocol to follow.

Definition 20 Let D_r^t be a dialogue. The current dialogue is given by Current(D_r^t) such that Current(D_r^t) = D_{r1}^t (1 ≤ r ≤ r1 ≤ t) where the following conditions hold:
1. m_{r1} = ⟨x, open, dialogue(θ, γ)⟩ for some x ∈ I, some γ ∈ B and some θ ∈ {wi, ai},
2. ∀ D_{r2}^{t1}, if D_{r2}^{t1} is a subdialogue of D_{r1}^t, then ∃ D_{r2}^{t2} s.t. either D_{r2}^{t2} extends D_{r2}^{t1} or D_{r2}^{t1} extends D_{r2}^{t2}, and D_{r2}^{t2} is a subdialogue of D_{r1}^t and D_{r2}^{t2} terminates at t2,
3. ∄ D_{r1}^{t3} s.t. D_{r1}^t extends D_{r1}^{t3} and D_{r1}^{t3} terminates at t3.
If the above conditions do not hold, then Current(D_r^t) = null. The topic of the current dialogue is returned by the function cTopic(D_r^t) such that cTopic(D_r^t) = Topic(Current(D_r^t)). The type of the current dialogue is returned by the function cType(D_r^t) such that cType(D_r^t) = Type(Current(D_r^t)).

We now give a schematic example of nested dialogues.

Example 10 An example of nested dialogues is shown in Figure 2. In this example:
Current(D_1^t) = D_1^t
Current(D_1^{t-1}) = D_i^{t-1}
Current(D_1^k) = D_i^k
Current(D_1^{k-1}) = D_j^{k-1}
Current(D_i^t) = null
Current(D_i^{t-1}) = D_i^{t-1}
Current(D_i^k) = D_i^k
Current(D_i^{k-1}) = D_j^{k-1}
Current(D_j^k) = null
Current(D_j^{k-1}) = D_j^{k-1}

We have now defined our general framework for representing dialogues; in the following section we give the details needed to generate argument inquiry and warrant inquiry dialogues.

5 Generating dialogues

In this section we give the details specific to argument inquiry and warrant inquiry dialogues that, along with the general framework given in the previous section, comprise our inquiry dialogue system. In Sections 5.1 and 5.2 we give the protocols needed to model legal argument inquiry and warrant inquiry dialogues, we define what a well-formed argument inquiry dialogue and a well-formed warrant inquiry dialogue are (a dialogue that terminates and whose moves are legal according to the relevant protocol), and we define the outcomes of the two dialogue types. In Section 5.3 we give the details of a strategy that can be used to generate legal argument inquiry and warrant inquiry dialogues (i.e. that allows an agent to select exactly one of the legal moves to make at any point in the dialogue).

We adopt the common approach of associating a commitment store with each agent participating in a dialogue (e.g. [34,39]).
A commitment store is a set of beliefs that the agent is publicly committed to at the current point of the dialogue (i.e. that it has asserted). As a commitment store consists of things that the agent has already publicly declared, its contents are visible to the other agent participating in the dialogue. For this reason, when constructing an argument, an agent may make use of not only its own beliefs but also those in the other agent's commitment store.

Definition 21 A commitment store is a set of beliefs denoted CS_x^t (i.e. CS_x^t ⊆ B), where x ∈ I is an agent and t ∈ N is a timepoint.

When an agent enters into a top-level dialogue of any kind, a commitment store is created and persists until that dialogue has terminated (i.e. this same commitment store is used for any subdialogues of the top-level dialogue). If an agent makes a move asserting an argument, every element of the support is added to the agent's commitment store. This is the only time the commitment store is updated.
[Fig. 2 (figure) The nested dialogue structure
D_1^t = [m_1, ..., m_i, ..., m_j, ..., m_k, m_{k+1}, ..., m_{t-1}, m_t], with 1 < i < j < k < t-1,
where D_i^t = [m_i, ..., m_{t-1}, m_t] and D_j^k = [m_j, ..., m_{k-1}, m_k], and
m_1 = ⟨P_1, open, dialogue(θ_1, φ_1)⟩
m_i = ⟨P_i, open, dialogue(θ_i, φ_i)⟩
m_j = ⟨P_j, open, dialogue(θ_j, φ_j)⟩
m_{k-1} = ⟨P_{k-1}, close, dialogue(θ_j, φ_j)⟩
m_k = ⟨P_k, close, dialogue(θ_j, φ_j)⟩
m_{t-1} = ⟨P_{t-1}, close, dialogue(θ_i, φ_i)⟩
m_t = ⟨P_t, close, dialogue(θ_i, φ_i)⟩]

Fig. 2 Nested dialogues. D_1^t is a top-level dialogue that has not yet terminated. D_i^t is a subdialogue of D_1^t that terminates at t. D_j^k is a subdialogue of both D_1^t and D_i^t that terminates at k. D_1^t is a top-dialogue of D_1^t. D_1^k is a top-dialogue of D_j^k. D_1^t is a top-dialogue of D_i^t. D_1^k is a top-dialogue of D_1^k.

Definition 22 (Commitment store update) Let the current dialogue be D_r^t with participants I = {1, 2}.
CS_x^t = ∅ iff t = 0,
CS_x^t = CS_x^{t-1} ∪ Φ iff m_t = ⟨x, assert, ⟨Φ, φ⟩⟩,
CS_x^t = CS_x^{t-1} otherwise.

The only move used in argument inquiry or warrant inquiry dialogues that affects an agent's commitment store is the assert move, which causes the support of the argument being asserted to be added to the commitment store; hence the commitment store of an agent participating in an argument inquiry or warrant inquiry dialogue grows monotonically over time. If we were to define more moves in order to allow our system to generate other dialogue types, it would be necessary to define the effect of those moves on an agent's commitment store. For example, a retract move (one that causes a belief to be removed from the commitment store) may be necessary for a negotiation dialogue, in which case the commitment store of an agent participating in a negotiation dialogue would not necessarily grow monotonically over time.

In the following subsection we give the details needed to allow us to model well-formed argument inquiry dialogues.
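The commitment store update just described can be sketched as a pure function on moves. The (sender, act, content) move encoding and the (support, claim) argument encoding are assumptions of this sketch, not the paper's notation.

```python
def update_commitment_store(cs_prev, move, x):
    """Commitment store of agent x after `move` is made (a sketch of
    Definition 22).

    `cs_prev` is x's store at the previous timepoint; `move` is a
    (sender, act, content) triple, where an assert move's content is an
    argument (support, claim) and the support is a set of beliefs.
    """
    sender, act, content = move
    if act == "assert" and sender == x:
        support, _claim = content
        return cs_prev | set(support)   # only asserts grow the store
    return cs_prev                      # every other move leaves it unchanged
```

Because the function only ever adds elements, the store grows monotonically over time, matching the discussion above.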
5.1 Modelling argument inquiry dialogues

An argument inquiry dialogue is initiated when an agent wants to construct an argument for a certain claim, let us say φ, that it cannot construct alone. If the agent knows of a domain belief whose consequent is that claim, let us say (α_1 ∧ ... ∧ α_n → φ, L), then the agent will open an argument inquiry dialogue with α_1 ∧ ... ∧ α_n → φ as its topic. If, between them, the two participating agents can provide arguments for each of the elements α_i (1 ≤ i ≤ n) in the antecedent of the topic, then it is possible for an argument for φ to be constructed.

We define the query store as the set of literals that could help construct an argument for the consequent of the topic of an argument inquiry dialogue: when an argument inquiry dialogue with topic α_1 ∧ ... ∧ α_n → β is opened, a query store associated with that dialogue is created whose contents are {α_1, ..., α_n, β}. Throughout the dialogue the participating agents will both try to provide arguments for the literals in the query store. This may lead them to open further nested argument inquiry dialogues that have as a topic a rule whose consequent is a literal in the current query store.

Definition 23 For a dialogue D_r^t with participants I = {1, 2}, a query store, denoted QS_r, is a finite set of literals such that
QS_r = {α_1, ..., α_n, β} iff m_r = ⟨x, open, dialogue(ai, α_1 ∧ ... ∧ α_n → β)⟩, and QS_r = ∅ otherwise.
The query store of the current dialogue is given by cQS(D_r^t) such that cQS(D_r^t) = QS_{r1} iff Current(D_r^t) = D_{r1}^t.

A query store is fixed and is determined by the open move that opens the associated argument inquiry dialogue. If the current dialogue is an argument inquiry dialogue, then the agents consult the query store of the current dialogue in order to determine what arguments it might be helpful to try to construct (i.e. those whose claim is a member of the current query store).
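The construction of a query store from an open move (Definition 23) can be sketched as follows; the (antecedents, consequent) encoding of a defeasible rule is an assumption of this illustration.

```python
def query_store(open_move):
    """Query store created by an argument inquiry open move (a sketch of
    Definition 23).

    The move is modelled as (sender, act, (dialogue_type, topic)) and a
    rule topic as (antecedents, consequent).
    """
    _sender, act, content = open_move
    if act == "open":
        dialogue_type, topic = content
        if dialogue_type == "ai":
            antecedents, consequent = topic
            # QS contains every antecedent literal plus the consequent
            return set(antecedents) | {consequent}
    return set()   # any other move yields an empty query store
```

For a rule a1 ∧ a2 → b, the query store is {a1, a2, b}: the agents then try to find arguments for each of these literals.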
A protocol is a function that returns the set of moves that are legal for an agent to make at a particular point in a particular type of dialogue. Here we give the specific protocol for argument inquiry dialogues. It takes the top-level dialogue that the agents are participating in and returns the set of legal moves that the agent may make.

Definition 24 The argument inquiry protocol is a function Π_ai : D_top → ℘(M). If D_1^t is a top-level dialogue with participants I = {1, 2} such that Sender(m_t) = x̂ (1 ≤ t) and cTopic(D_1^t) = γ, then

Π_ai(D_1^t) = Π_ai^assert(D_1^t) ∪ Π_ai^open(D_1^t) ∪ {⟨x, close, dialogue(ai, γ)⟩}

where

Π_ai^assert(D_1^t) = {⟨x, assert, ⟨Φ, φ⟩⟩ | (1) φ ∈ cQS(D_1^t), (2) ∄ t' s.t. 1 < t' ≤ t and m_{t'} = ⟨x', assert, ⟨Φ, φ⟩⟩ and x' ∈ I}

Π_ai^open(D_1^t) = {⟨x, open, dialogue(ai, β_1 ∧ ... ∧ β_n → α)⟩ | (1) α ∈ cQS(D_1^t), (2) ∄ t' s.t. 1 < t' ≤ t and m_{t'} = ⟨x', open, dialogue(ai, β_1 ∧ ... ∧ β_n → α)⟩ and x' ∈ I}
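A minimal sketch of this legal-move computation, under assumed encodings (moves as (sender, act, content) triples, arguments as (support, claim) pairs, rules as (antecedents, consequent) pairs); the candidate arguments and rules the agent can construct are passed in explicitly, which is a simplification of the definition above.

```python
def legal_moves(x, dialogue, qs, args, rules):
    """Moves agent x may legally make next in an argument inquiry
    dialogue (a sketch of Definition 24).

    `dialogue` is the move sequence so far, `qs` the current query store,
    `args` the (support, claim) arguments x can construct, and `rules`
    the (antecedents, consequent) defeasible rules x knows.
    """
    topic = dialogue[0][2]                 # content of the opening move
    moves = [(x, "close", topic)]          # a close move is always legal
    for arg in args:
        _support, claim = arg
        not_yet_asserted = not any(m[1] == "assert" and m[2] == arg
                                   for m in dialogue)
        if claim in qs and not_yet_asserted:    # clauses (1) and (2)
            moves.append((x, "assert", arg))
    for rule in rules:
        _antecedents, consequent = rule
        not_yet_opened = not any(m[1] == "open" and m[2] == ("ai", rule)
                                 for m in dialogue)
        if consequent in qs and not_yet_opened:  # clauses (1) and (2)
            moves.append((x, "open", ("ai", rule)))
    return moves
```

Once an argument has been asserted, it is excluded from the legal moves of both agents, which is what clause (2) of each set above enforces.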
The argument inquiry protocol tells us that an agent may always legally make a close move, indicating that it wishes to terminate the dialogue (recall, however, that the dialogue only terminates when both agents indicate that they wish to terminate it, leading to a matched-close). An agent may legally assert an argument that has not previously been asserted, as long as its claim is in the current query store (and so may help the agents in finding an argument for the consequent of the topic of the dialogue). An agent may legally open a new, embedded argument inquiry dialogue provided that an argument inquiry dialogue with the same topic has not previously been opened and the topic of the new argument inquiry dialogue is a defeasible rule that has a member of the current query store as its consequent (and so any arguments successfully found in this new argument inquiry dialogue may help the agents in finding an argument for the consequent of the topic of the current dialogue). It is straightforward to check conformance with the protocol, as it refers only to public elements of the dialogue.

We are now able to define a well-formed argument inquiry dialogue. This is a dialogue that starts with a move opening an argument inquiry dialogue (condition 1 of the following definition), that has a continuation which terminates (condition 2), and whose moves conform to the argument inquiry protocol (condition 3).

Definition 25 Let D_r^t be a dialogue with participants I = {1, 2}. D_r^t is a well-formed argument inquiry dialogue iff the following conditions hold:
1. m_r = ⟨x, open, dialogue(ai, γ)⟩ where x ∈ I and γ ∈ R (i.e. γ is a defeasible rule),
2. ∃ t' s.t. t ≤ t', D_r^{t'} extends D_r^t, and D_r^{t'} terminates at t',
3. ∀ s s.t. r ≤ s < t and D_r^t extends D_r^s, if D_1^t is a top-dialogue of D_r^t and D_1^s is a top-dialogue of D_r^s and D_1^t extends D_1^s and Sender(m_s) = x (where x ∈ I), then m_{s+1} ∈ Π_ai(D_1^s) and Sender(m_{s+1}) = x̂ (where x̂ ∈ I, x̂ ≠ x).
The set of all well-formed argument inquiry dialogues is denoted D_ai. We define the outcome of an argument inquiry dialogue as the set of all arguments that can be constructed from the union of the commitment stores and whose claims are in the query store.

Definition 26 The argument inquiry outcome of a dialogue is given by a function Outcome_ai : D_ai → ℘(A(B)). If D_r^t is a well-formed argument inquiry dialogue with participants I = {1, 2}, then
Outcome_ai(D_r^t) = {⟨Φ, φ⟩ ∈ A(CS_1^t ∪ CS_2^t) | φ ∈ QS_r}.

We are now able to model well-formed argument inquiry dialogues and determine their outcome. In the following subsection we give the details that we need to model well-formed warrant inquiry dialogues.

5.2 Modelling warrant inquiry dialogues

The goal of a warrant inquiry dialogue is to jointly construct a dialectical tree whose root is an argument for the topic of the dialogue; the topic of the dialogue is warranted
More informationCommutative Algebra seminar talk
Commutative Algebra seminar talk Reeve Garrett May 22, 2015 1 Ultrafilters Definition 1.1 Given an infinite set W, a collection U of subsets of W which does not contain the empty set and is closed under
More informationMASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 1 9/3/2008 PROBABILISTIC MODELS AND PROBABILITY MEASURES
MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 1 9/3/2008 PROBABILISTIC MODELS AND PROBABILITY MEASURES Contents 1. Probabilistic experiments 2. Sample space 3. Discrete probability
More informationFunctional Dependencies (FD)
Example: Functional Dependencies (FD) Lets consider the following relation STUDENT(STUID, STUNAME, MAJOR, CREDITS, STATUS, SSN) Constrains: Each student has a unique SSN and ID. Each student has at most
More information0 ( x) 2 = ( x)( x) = (( 1)x)(( 1)x) = ((( 1)x))( 1))x = ((( 1)(x( 1)))x = ((( 1)( 1))x)x = (1x)x = xx = x 2.
SOLUTION SET FOR THE HOMEWORK PROBLEMS Page 5. Problem 8. Prove that if x and y are real numbers, then xy x + y. Proof. First we prove that if x is a real number, then x 0. The product of two positive
More informationCartesian Products and Relations
Cartesian Products and Relations Definition (Cartesian product) If A and B are sets, the Cartesian product of A and B is the set A B = {(a, b) :(a A) and (b B)}. The following points are worth special
More informationWe now explore a third method of proof: proof by contradiction.
CHAPTER 6 Proof by Contradiction We now explore a third method of proof: proof by contradiction. This method is not limited to proving just conditional statements it can be used to prove any kind of statement
More informationRCalculus: A Logical Framework for Scientific Discovery
RCalculus: A Logical Framework for Scientific Discovery Wei Li State Key Laboratory of Software Development Environment Beihang University August 9, 2013 Outline 1 Motivation 2 Key points of Rcalculus
More informationPropositional Logic and Methods of Inference SEEM
Propositional Logic and Methods of Inference SEEM 5750 1 Logic Knowledge can also be represented by the symbols of logic, which is the study of the rules of exact reasoning. Logic is also of primary importance
More informationLecture 2: March 20, 2015
Automatic Software Verification Spring Semester, 2015 Lecture 2: March 20, 2015 Lecturer: Prof. Mooly Sagiv Scribe: Kalev Alpernas and Elizabeth Firman 2.1 Introduction In this lecture we will go over
More informationEfficiency in Persuasion Dialogues
Efficiency in Persuasion Dialogues Katie Atkinson 1, Priscilla BenchCapon 2 and Trevor BenchCapon 1 1 Department of Computer Science, University of Liverpool, Liverpool, UK 2 The Open University, Milton
More informationMAT2400 Analysis I. A brief introduction to proofs, sets, and functions
MAT2400 Analysis I A brief introduction to proofs, sets, and functions In Analysis I there is a lot of manipulations with sets and functions. It is probably also the first course where you have to take
More informationMathematical Induction
Chapter 2 Mathematical Induction 2.1 First Examples Suppose we want to find a simple formula for the sum of the first n odd numbers: 1 + 3 + 5 +... + (2n 1) = n (2k 1). How might we proceed? The most natural
More information3515ICT Theory of Computation Turing Machines
Griffith University 3515ICT Theory of Computation Turing Machines (Based loosely on slides by Harald Søndergaard of The University of Melbourne) 90 Overview Turing machines: a general model of computation
More informationA Framework for the Semantics of Behavioral Contracts
A Framework for the Semantics of Behavioral Contracts Ashley McNeile Metamaxim Ltd, 48 Brunswick Gardens, London W8 4AN, UK ashley.mcneile@metamaxim.com Abstract. Contracts have proved a powerful concept
More information2. Methods of Proof Types of Proofs. Suppose we wish to prove an implication p q. Here are some strategies we have available to try.
2. METHODS OF PROOF 69 2. Methods of Proof 2.1. Types of Proofs. Suppose we wish to prove an implication p q. Here are some strategies we have available to try. Trivial Proof: If we know q is true then
More informationClassification with Decision Trees
Classification with Decision Trees Yufei Tao Department of Computer Science and Engineering Chinese University of Hong Kong 1 / 24 Y Tao Classification with Decision Trees In this lecture, we will discuss
More informationMidterm Examination CS540: Introduction to Artificial Intelligence
Midterm Examination CS540: Introduction to Artificial Intelligence October 27, 2010 LAST NAME: SOLUTION FIRST NAME: SECTION (1=Zhu, 2=Dyer): Problem Score Max Score 1 15 2 16 3 7 4 15 5 12 6 10 7 15 8
More informationSelfdirected learning: managing yourself and your working relationships
ASSERTIVENESS AND CONFLICT In this chapter we shall look at two topics in which the ability to be aware of and to manage what is going on within yourself is deeply connected to your ability to interact
More informationTopology and Convergence by: Daniel Glasscock, May 2012
Topology and Convergence by: Daniel Glasscock, May 2012 These notes grew out of a talk I gave at The Ohio State University. The primary reference is [1]. A possible error in the proof of Theorem 1 in [1]
More informationChapter 3. Cartesian Products and Relations. 3.1 Cartesian Products
Chapter 3 Cartesian Products and Relations The material in this chapter is the first real encounter with abstraction. Relations are very general thing they are a special type of subset. After introducing
More informationLecture 7: Approximation via Randomized Rounding
Lecture 7: Approximation via Randomized Rounding Often LPs return a fractional solution where the solution x, which is supposed to be in {0, } n, is in [0, ] n instead. There is a generic way of obtaining
More informationFull and Complete Binary Trees
Full and Complete Binary Trees Binary Tree Theorems 1 Here are two important types of binary trees. Note that the definitions, while similar, are logically independent. Definition: a binary tree T is full
More informationThe Object Model Overview
The Object Model 3 3.1 Overview The object model provides an organized presentation of object concepts and terminology. It defines a partial model for computation that embodies the key characteristics
More informationChapter 7: Relational Database Design
Chapter 7: Relational Database Design Pitfalls in Relational Database Design Decomposition Normalization Using Functional Dependencies Normalization Using Multivalued Dependencies Normalization Using Join
More informationA set is a Many that allows itself to be thought of as a One. (Georg Cantor)
Chapter 4 Set Theory A set is a Many that allows itself to be thought of as a One. (Georg Cantor) In the previous chapters, we have often encountered sets, for example, prime numbers form a set, domains
More informationApplications of Methods of Proof
CHAPTER 4 Applications of Methods of Proof 1. Set Operations 1.1. Set Operations. The settheoretic operations, intersection, union, and complementation, defined in Chapter 1.1 Introduction to Sets are
More informationData exchange. L. Libkin 1 Data Integration and Exchange
Data exchange Source schema, target schema; need to transfer data between them. A typical scenario: Two organizations have their legacy databases, schemas cannot be changed. Data from one organization
More informationAn uncountably categorical theory whose only computably presentable model is saturated
An uncountably categorical theory whose only computably presentable model is saturated Denis R. Hirschfeldt Department of Mathematics University of Chicago, USA Bakhadyr Khoussainov Department of Computer
More informationChapter 2 Limits Functions and Sequences sequence sequence Example
Chapter Limits In the net few chapters we shall investigate several concepts from calculus, all of which are based on the notion of a limit. In the normal sequence of mathematics courses that students
More information1 if 1 x 0 1 if 0 x 1
Chapter 3 Continuity In this chapter we begin by defining the fundamental notion of continuity for real valued functions of a single real variable. When trying to decide whether a given function is or
More informationBayesian and Classical Inference
Eco517, Part I Fall 2002 C. Sims Bayesian and Classical Inference Probability statements made in Bayesian and classical approaches to inference often look similar, but they carry different meanings. Because
More information2. Propositional Equivalences
2. PROPOSITIONAL EQUIVALENCES 33 2. Propositional Equivalences 2.1. Tautology/Contradiction/Contingency. Definition 2.1.1. A tautology is a proposition that is always true. Example 2.1.1. p p Definition
More informationCOLORED GRAPHS AND THEIR PROPERTIES
COLORED GRAPHS AND THEIR PROPERTIES BEN STEVENS 1. Introduction This paper is concerned with the upper bound on the chromatic number for graphs of maximum vertex degree under three different sets of coloring
More informationA first step towards modeling semistructured data in hybrid multimodal logic
A first step towards modeling semistructured data in hybrid multimodal logic Nicole Bidoit * Serenella Cerrito ** Virginie Thion * * LRI UMR CNRS 8623, Université Paris 11, Centre d Orsay. ** LaMI UMR
More informationXPath, transitive closure logic, and nested tree walking automata
XPath, transitive closure logic, and nested tree walking automata Balder ten Cate U. of Amsterdam and UC Santa Cruz (visiting IBM Almaden) Luc Segoufin INRIA, ENS Cachan Logic and Algorithms, Edinburgh
More informationDIFFERENTIAL GEOMETRY HW 2
DIFFERENTIAL GEOMETRY HW 2 CLAY SHONKWILER 2 Prove that the only orientationreversing isometries of R 2 are glide reflections, that any two glidereflections through the same distance are conjugate by
More informationThis asserts two sets are equal iff they have the same elements, that is, a set is determined by its elements.
3. Axioms of Set theory Before presenting the axioms of set theory, we first make a few basic comments about the relevant first order logic. We will give a somewhat more detailed discussion later, but
More informationIntroducing Functions
Functions 1 Introducing Functions A function f from a set A to a set B, written f : A B, is a relation f A B such that every element of A is related to one element of B; in logical notation 1. (a, b 1
More informationPartitioning edgecoloured complete graphs into monochromatic cycles and paths
arxiv:1205.5492v1 [math.co] 24 May 2012 Partitioning edgecoloured complete graphs into monochromatic cycles and paths Alexey Pokrovskiy Departement of Mathematics, London School of Economics and Political
More informationA Multiagent System for Knowledge Management based on the Implicit Culture Framework
A Multiagent System for Knowledge Management based on the Implicit Culture Framework Enrico Blanzieri Paolo Giorgini Fausto Giunchiglia Claudio Zanoni Department of Information and Communication Technology
More informationMathematical Induction
Mathematical Induction Victor Adamchik Fall of 2005 Lecture 1 (out of three) Plan 1. The Principle of Mathematical Induction 2. Induction Examples The Principle of Mathematical Induction Suppose we have
More informationALGEBRA HANDOUT 2: IDEALS AND QUOTIENTS. 1. Ideals in Commutative Rings In this section all groups and rings will be commutative.
ALGEBRA HANDOUT 2: IDEALS AND QUOTIENTS PETE L. CLARK 1. Ideals in Commutative Rings In this section all groups and rings will be commutative. 1.1. Basic definitions and examples. Let R be a (commutative!)
More informationVerification Theoretical Foundations of Testing
1 / 12 Verification Theoretical Foundations of Testing Miaoqing Huang University of Arkansas Spring 2010 2 / 12 Outline 1 Theoretical Foundations of Testing 3 / 12 Definition I Definition P: the program
More informationTemporal Logic [Syntax and Semantics]
Temporal Logic [ and s] Michael Fisher Department of Computer Science, University of Liverpool, UK [MFisher@liverpool.ac.uk] An Introduction to Practical Formal Methods Using Temporal Logic c Michael Fisher
More informationGRAPH THEORY LECTURE 4: TREES
GRAPH THEORY LECTURE 4: TREES Abstract. 3.1 presents some standard characterizations and properties of trees. 3.2 presents several different types of trees. 3.7 develops a counting method based on a bijection
More information