
Autonomous Agents and Multi-Agent Systems manuscript No. (will be inserted by the editor)

An Inquiry Dialogue System

Elizabeth Black · Anthony Hunter

Received: date / Accepted: date

E. Black
COSSAC: IRC in Cognitive Science and Systems Engineering, Department of Engineering Science, University of Oxford, Oxford, UK. E-mail: lizblack@robots.ox.ac.uk

A. Hunter
Department of Computer Science, University College London, London, UK

Abstract The majority of existing work on agent dialogues considers negotiation, persuasion or deliberation dialogues; we focus on inquiry dialogues, which allow agents to collaborate in order to find new knowledge. We present a general framework for representing dialogues and give the details necessary to generate two subtypes of inquiry dialogue that we define: argument inquiry dialogues allow two agents to share knowledge to jointly construct arguments; warrant inquiry dialogues allow two agents to share knowledge to jointly construct dialectical trees (essentially a tree with an argument at each node in which a child node is a counter argument to its parent). Existing inquiry dialogue systems only model dialogues, meaning they provide a protocol which dictates what the possible legal next moves are but not which of these moves to make. Our system not only includes a dialogue-game style protocol for each subtype of inquiry dialogue that we present, but also a strategy that selects exactly one of the legal moves to make. We propose a benchmark against which we compare our dialogues, being the arguments that can be constructed from the union of the agents' beliefs, and use this to define soundness and completeness properties that we show hold for all inquiry dialogues generated by our system.

Keywords agent interaction · argumentation · inquiry dialogue · cooperation

1 Introduction

Dialogue games are now a common approach to characterizing argumentation-based agent dialogues (e.g. [33,39,42]). Dialogue games are normally made up of a set of communicative acts called moves, and sets of rules stating: which moves it is legal to make at any point in a dialogue (the protocol); the effect of making a move; and when a dialogue terminates. One attraction of dialogue games is that it is possible

to embed games within games, allowing complex conversations made up of nested dialogues of more than one type (e.g. [34,45]). Most of the work so far has looked only at modelling different types of dialogue from the influential Walton and Krabbe typology [49], meaning that such systems provide a protocol which dictates what the possible legal next moves are but not which one of these legal moves to make. Here we present a generative system, as we not only provide a protocol but also provide a strategy for selecting exactly one of the legal moves to make. Examples of dialogue systems which model each of the five main Walton and Krabbe dialogue types are: information-seeking [30,40] (where participants aim to share knowledge); inquiry [32,40] (where participants aim to jointly discover new knowledge); persuasion [4,17] (where participants aim to resolve conflicts of opinion); negotiation [3,30,35,47] (where participants who need to cooperate aim to agree on a method for doing this that resolves their conflicting interests); and deliberation [29] (where participants aim to jointly decide on a plan of action). Walton and Krabbe classify their dialogue types according to three characteristics: the initial situation from which the dialogue arises; the main goal of the dialogue, to which all the participating agents subscribe; and the personal aims of each individual agent. This article focuses on two subtypes of inquiry dialogue that we define. Walton and Krabbe define an inquiry dialogue as arising from an initial situation of general ignorance and as having the main goal to achieve the growth of knowledge and agreement. Each individual participating in an inquiry dialogue has the goal to "find a proof or destroy one" [49, page 66]. We have previously proposed a dialogue system [12] for generating a subtype of inquiry dialogue that we call argument inquiry.
In an argument inquiry dialogue, the proof that the participating agents are jointly searching for takes the form of an argument for the topic of the dialogue. In this article we adapt the system proposed in [12] to generate a second subtype of inquiry dialogue that we call warrant inquiry. In a warrant inquiry dialogue, the proof that the participating agents are jointly searching for takes the form of a dialectical tree (essentially a tree with an argument at each node, where each child node is a counter argument to its parent, and whose root is an argument whose claim is the topic of the dialogue). Warrant inquiry dialogues are so called as the dialectical tree produced during the dialogue may act as a warrant for the argument at its root. The goal, then, of the participants in an argument inquiry dialogue is to share beliefs in order to jointly construct arguments for a specific claim that none of the individual participants may construct from their own personal beliefs alone; the goal of agents taking part in a warrant inquiry dialogue is to share arguments in order to jointly construct a dialectical tree that none of the individual participants may construct from their own personal beliefs alone. In an argument inquiry dialogue, the agents wish to exchange beliefs in order to jointly construct arguments for a particular claim; however, an argument inquiry dialogue does not allow the agents to determine the acceptability of the arguments constructed (i.e. whether the arguments are ultimately defeated by any other conflicting arguments). In a warrant inquiry dialogue, the agents are interested in determining the acceptability of a particular argument; they do this by jointly constructing a dialectical tree that collects all the arguments that may be relevant to the acceptability of the argument in question. Argument inquiry dialogues are often embedded within warrant inquiry dialogues.
Without embedded argument inquiry dialogues, the arguments that can be exchanged within a warrant inquiry dialogue potentially miss out on useful arguments that involve

unexpressed beliefs of the other agent. (We have presented the system for generating argument inquiry dialogues previously [12]; we present it again here as it is necessary for generating warrant inquiry dialogues.) The main contribution of this article is a protocol and strategy sufficient to generate sound and complete warrant inquiry dialogues. As far as we are aware, only two other groups have proposed inquiry protocols. Amgoud, Maudet, Parsons and Wooldridge proposed a protocol for general argument inquiry dialogues (e.g. [3,39]); however, this protocol can lead to unsuccessful dialogues in which no argument for the topic is found even when such an argument does exist in the union of the two agents' beliefs. In [32], McBurney and Parsons present an inquiry protocol that is similar in spirit to our warrant inquiry protocol in that it allows the agents involved to dialectically reason about the acceptability of an argument given a set of arguments and the counter argument relations between them. Although their protocol allows agents to exchange arguments in order to carry out the dialectical reasoning, it does not allow the agents to jointly construct arguments, and so the reasoning that they participate in may be incomplete in the sense that it may miss important arguments that can only be constructed jointly by two or more agents. Neither of these groups has proposed a strategy for use with their inquiry protocol, i.e. their systems model inquiry dialogues but are not sufficient to generate them. A key contribution of this work is that we not only provide a protocol for modelling inquiry dialogues but also a specific strategy to be followed, making this system sufficient to also generate inquiry dialogues. Other works have also considered the generation of dialogues. For example, [44] gives an account of the different factors which must be considered when designing a dialogue strategy. Parsons et al.
[39] explore the effect of different agent attitudes, which reduce the set of legal moves from which an agent must choose a move but do not select exactly one of the legal moves to make. Pasquier et al.'s cognitive coherence theory [41] addresses the pragmatic issue of dialogue generation, but it is not clear what behaviour this would produce. Both [2] and [31] propose a formalism for representing the private strategy of an agent, to which argumentation is then applied to determine the move to be made at a point in a dialogue; however, neither gives a specific strategy for inquiry dialogues. Whilst much of the work on argumentation has been intended for use in adversarial domains such as law (e.g. [5,7,10,28,43]), we have been inspired by the cooperative medical domain. In adversarial domains, agents participating in a dialogue are typically concerned with defending their own arguments and defeating the arguments of their opponents; in cooperative domains, agents instead aim to arrive at the best joint outcome, even if this means accepting the defeat of their own arguments. Medical knowledge is typically uncertain and often incomplete and inconsistent, making argumentation an attractive approach for carrying out reasoning and decision making in the medical domain [24]. Inquiry dialogues are of particular use in the medical domain, where it is often the case that people have distinct types of knowledge and so need to interact with others in order to have all the information necessary to make a decision. Another important characteristic of the medical domain is that it is safety-critical [23]; if our dialogue system is to be used in such a domain, it is essential that the dialogues our system produces arrive at the appropriate outcome. We wish the outcome of our dialogues to be predetermined by the fixed protocol, the strategy being followed and the belief bases of the participating agents; i.e.
given the agents' beliefs, we want to know what outcome they will arrive at and that this will be the appropriate outcome.

As discussed in [39], this can be viewed as a positive or negative feature of a dialogue system depending on the application. In a more competitive environment it may well be the case that one would wish it to be possible for agents to behave in an intelligent manner in order to influence the outcome of a dialogue. However, we want our dialogues to always lead to the ideal outcome. That is to say, we want the dialogues generated by our system to be sound and complete, in relation to some standard benchmark. We compare the outcome of our dialogues with the outcome that would be arrived at by a single agent whose beliefs are the union of the beliefs of both agents participating in the dialogue. This is, in a sense, the ideal situation, where there are clearly no constraints on the sharing of beliefs. As the dialogue outcome we are aiming for is the same as the outcome we would arrive at if reasoning with the union of the agents' beliefs, a natural question to ask is why not simply pool the agents' beliefs and then reason with this set? In some situations, it may indeed be more appropriate to pool the agents' beliefs (e.g. as part of computer supported collaborative learning [27]); however, in many real world scenarios, such as within the medical domain, there are often privacy issues that would restrict the agents from simply pooling all beliefs; what we provide here can be viewed as a mechanism for a joint directed search that ensures the agents only share beliefs that could be relevant to the topic of the dialogue. It could also be the case that the belief bases of the agents are so vast that the communication costs involved in pooling all beliefs would be prohibitive. The main contribution of this article is a system for generating sound and complete warrant inquiry dialogues. We build on the system we proposed in [12] in the sense that we use the same underlying formalism for modelling and generating dialogues.
However, whilst in [12] we provided only a protocol and strategy for generating sound and complete argument inquiry dialogues, here we also include a protocol and strategy for generating warrant inquiry dialogues and give soundness and completeness results for all inquiry dialogues generated by our system. We have presented the details relating to argument inquiry dialogues (that were previously given in [12]) again in this article as they are necessary for generating warrant inquiry dialogues. The rest of this article proceeds as follows. In Section 2 we present the knowledge representation used by the agents in our system and define how this knowledge can be used to construct arguments, and in Section 3 we present a method for constructing a dialectical tree in order to carry out a dialectical analysis of a set of arguments. Sections 2 and 3 thus present the argumentation system on which this dialogue system operates, which is based on García and Simari's Defeasible Logic Programming (DeLP) [25]. The presentation of the argumentation system here differs only slightly from that in [25] and does not represent a contribution of this work. In Section 4 we define the general framework used to represent dialogues. In Section 5 we give the protocols for modelling both argument inquiry and warrant inquiry dialogues, and also give a strategy for use with these protocols that allows agents to generate the dialogues (completing our inquiry dialogue system). In Section 6 we define soundness and completeness properties for both argument inquiry and warrant inquiry dialogues and show that these properties hold for all well formed inquiry dialogues generated by our system. Finally, in Section 7 we discuss other work related to this and in Section 8 we summarise our conclusions.

2 Knowledge representation and arguments

We adapt García and Simari's Defeasible Logic Programming (DeLP) [25] for representing each agent's beliefs. DeLP is a formalism that combines logic programming with defeasible argumentation. It is intended to allow a single agent to reason internally with inconsistent and incomplete knowledge that may change dynamically over time, and has been shown to be applicable in different real-world contexts (e.g. [14,26]). It provides a warrant procedure, which we will present in Section 3, that applies a dialectical reasoning mechanism to a set of arguments in order to decide whether a particular argument from that set is warranted. The presentation here differs only slightly from that in [25]. García and Simari assume that, as well as a set of defeasible rules, there is a set of strict rules. They also assume that facts are non-defeasible. As we are inspired by the medical domain (in which we know knowledge to often be incomplete, unreliable and inconsistent), we wish all knowledge to be treated as defeasible. We deal with this by assuming the sets of strict rules and facts are empty and by defining a defeasible fact (essentially a defeasible rule with an empty body). We use a restricted set of propositional logic and assume that a literal is either an atom α or a negated atom ¬α. We use the notation ⊢ to represent the classical consequence relation; we use ⊥ to represent classical contradiction; we use ᾱ to represent the complement of α, i.e. ᾱ = ¬a if and only if α = a, and ᾱ = a if and only if α = ¬a.

Definition 1 A defeasible rule is denoted α1 ∧ ... ∧ αn → α0 where αi is a literal for 0 ≤ i ≤ n. A defeasible fact is denoted α where α is a literal.

The warrant procedure defined in [25] assumes that a formal criterion exists for comparing two arguments. We use a preference ordering across all knowledge, from which a preference ordering across arguments is derived.
We imagine that the preference ordering on medical knowledge would depend on the knowledge source [50]; we might assume, for example, that knowledge from an established clinical guideline is preferred to knowledge that has resulted from a small clinical trial. We associate a preference level with a defeasible rule or defeasible fact to form a belief; the lower the preference level, the more preferred the belief.

Definition 2 A belief is a pair (φ, L) where φ is either a defeasible fact or a defeasible rule, and L ∈ {1, 2, 3, ...} is a label that denotes the preference level of the belief. The function plevel returns the preference level of the belief: plevel((φ, L)) = L. The set of all beliefs is denoted B.

We make a distinction between beliefs in defeasible facts (called state beliefs, as these are beliefs about the state of the world) and beliefs in defeasible rules (called domain beliefs, as these are beliefs about how the domain is expected to behave). We also consider the set of defeasible facts and the set of defeasible rules, and the union of these two sets.

Definition 3 A state belief is a belief (φ, L) where φ is a defeasible fact. The set of all state beliefs is denoted S. A domain belief is a belief (φ, L) where φ is a defeasible rule. The set of all domain beliefs is denoted R. The set of all defeasible facts is denoted S* = {φ | (φ, L) ∈ S}. The set of all defeasible rules is denoted R* = {φ | (φ, L) ∈ R}. The set of all defeasible rules and all defeasible facts is denoted B* = {φ | (φ, L) ∈ B} = S* ∪ R*.
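To make the representation concrete, here is a minimal Python sketch; the encoding is ours, not part of the formalism. A defeasible fact is a literal string, with "-" marking negation; a defeasible rule is a (body, head) pair; a belief attaches a preference level to either.

```python
# Hypothetical encoding of Definitions 1-3: a belief is (formula, level).
# A fact formula is a literal string ("a", "-a"); a rule formula is a
# (body_literals, head) pair, e.g. (("b", "-c"), "d") for b ∧ ¬c → d
# (an illustrative rule, not one from the running example).

def plevel(belief):
    """Preference level of a belief (Definition 2); lower is more preferred."""
    _formula, level = belief
    return level

def is_state_belief(belief):
    """State beliefs wrap defeasible facts, i.e. bare literals (Definition 3)."""
    formula, _level = belief
    return isinstance(formula, str)

b1 = ("a", 1)                 # state belief: defeasible fact a, level 1
b2 = ((("b", "-c"), "d"), 2)  # domain belief: rule b ∧ ¬c → d, level 2
```

Encoding beliefs as plain tuples keeps them hashable, so a belief base can be an ordinary Python set.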

We assume that there are always exactly two agents (participants) taking part in a dialogue, each with its own identifier taken from the set I = {1, 2}. Although we have restricted the number of participants to two here for the sake of simplicity, we believe it is straightforward to adapt the system to allow multiple participants. Many of the difficult issues associated with multi-party dialogues (e.g. What are the agents' roles? How to manage turn taking? Who should be addressed with each move? [18]) can be easily overcome here due to the collaborative and exhaustive nature of the dialogues we are considering (e.g. all agents have the same role; each agent is assigned a place in a sequence and that sequence is followed for turn taking; each move made is broadcast to all the participants). We are currently working on multi-party dialogues that use the framework presented here. Each agent has a, possibly inconsistent, belief base.

Definition 4 A belief base associated with an agent x is a finite set, denoted Σx, such that Σx ⊆ B and x ∈ I = {1, 2}.

Example 1 Consider the following belief base associated with agent 1.

Σ1 = { (a, 1), (¬a, 1), (b, 2), (d, 1), (a → c, 3), (b → ¬c, 2), (d → ¬b, 1), (¬a → ¬b, 1) }

The first four elements are state beliefs, and we can see that the agent conflictingly believes strongly in both a and ¬a. The last four elements are all domain beliefs. plevel((a, 1)) = 1, plevel((¬a, 1)) = 1, plevel((b, 2)) = 2, plevel((d, 1)) = 1, plevel((a → c, 3)) = 3, plevel((b → ¬c, 2)) = 2, plevel((d → ¬b, 1)) = 1, plevel((¬a → ¬b, 1)) = 1. Recall that the lower the plevel value, the more preferred the belief.

We now define what constitutes a defeasible derivation. This has been adapted slightly from [25] in order to deal with our definition of a belief.

Definition 5 Let Ψ be a set of beliefs and α a defeasible fact.
A defeasible derivation of α from Ψ, denoted Ψ |∼ α, is a finite sequence α1, α2, ..., αn of literals such that αn is the defeasible fact α and each literal αm (1 ≤ m ≤ n) is in the sequence because:
1. (αm, L) is a state belief in Ψ, or
2. there exists (β1 ∧ ... ∧ βj → αm, L) ∈ Ψ s.t. every literal βi (1 ≤ i ≤ j) is an element αk preceding αm in the sequence (k < m).
The function DefDerivations : ℘(B) → ℘(S*) returns the set of literals that can be defeasibly derived from a set of beliefs Ψ, such that DefDerivations(Ψ) = {φ | there exists Φ ⊆ Ψ such that Φ |∼ φ}.

Example 2 If we continue with the running example started in Example 1, we see that the following defeasible derivations exist from Σ1: a; ¬a; b; d; a, c; b, ¬c; d, ¬b; ¬a, ¬b.

We now define an argument as being a minimally consistent set from which the claim can be defeasibly derived.

Definition 6 An argument A constructed from a set of, possibly inconsistent, beliefs Ψ (Ψ ⊆ B) is a tuple ⟨Φ, φ⟩ where φ is a defeasible fact and Φ is a set of beliefs such that:

1. Φ ⊆ Ψ,
2. Φ |∼ φ,
3. for all φ′, φ″ s.t. Φ |∼ φ′ and Φ |∼ φ″, it is not the case that φ′, φ″ ⊢ ⊥,
4. there is no subset of Φ that satisfies (1)-(3).
Φ is called the support of the argument and is denoted Support(A); φ is called the claim of the argument and is denoted Claim(A). For two arguments A1 and A2, A1 is a subargument of A2 iff Support(A1) ⊆ Support(A2). The set of all arguments that can be constructed from a set of beliefs Ψ is denoted A(Ψ).

Example 3 Continuing the running example, the following arguments can be constructed by the agent.

A(Σ1) = { a1 = ⟨{(a, 1)}, a⟩, a2 = ⟨{(¬a, 1)}, ¬a⟩, a3 = ⟨{(b, 2)}, b⟩, a4 = ⟨{(d, 1)}, d⟩, a5 = ⟨{(a, 1), (a → c, 3)}, c⟩, a6 = ⟨{(b, 2), (b → ¬c, 2)}, ¬c⟩, a7 = ⟨{(d, 1), (d → ¬b, 1)}, ¬b⟩, a8 = ⟨{(¬a, 1), (¬a → ¬b, 1)}, ¬b⟩ }

Note, a1 is a subargument of a5, a2 is a subargument of a8, a3 is a subargument of a6 and a4 is a subargument of a7. Every argument is a subargument of itself. As Ψ may be inconsistent, there may be conflicts between arguments within A(Ψ).

Definition 7 Let A1 and A2 be two arguments. A1 is in conflict with A2 iff Claim(A1), Claim(A2) ⊢ ⊥ (i.e. Claim(A1) = ¬Claim(A2), as the claim of an argument is always a literal).

Example 4 Continuing the running example, a1 is in conflict with a2, a3 is in conflict with a7, a3 is in conflict with a8, and a5 is in conflict with a6. Note that, as we are using ⊢ to represent classical consequence, two arguments are in conflict with one another if and only if their claims are the complement of one another. For certain applications, we may need a different definition of conflict between two arguments, depending on the meaning of the arguments' claims and the purpose of the argumentation process; for example, if the purpose of the argumentation is to decide between alternative actions to try to achieve a goal, we may want to define two arguments as being in conflict if their claims are two distinct actions. We now define the attack relationship between arguments.
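Since the claim of an argument must be defeasibly derivable from its support, it is worth noting that DefDerivations (Definition 5) can be computed by simple forward chaining. The sketch below uses a hypothetical encoding of ours (facts as literal strings with "-" for negation, rules as (body, head) pairs, each paired with a preference level) and reproduces the derivable literals of Example 2.

```python
# Hypothetical encoding of the running example's belief base Σ1:
# a fact belief is (literal, level); a rule belief is ((body, head), level).
SIGMA_1 = {
    ("a", 1), ("-a", 1), ("b", 2), ("d", 1),    # state beliefs
    ((("a",), "c"), 3), ((("b",), "-c"), 2),    # a → c,  b → ¬c
    ((("d",), "-b"), 1), ((("-a",), "-b"), 1),  # d → ¬b, ¬a → ¬b
}

def def_derivations(beliefs):
    """Literals defeasibly derivable from `beliefs` (Definition 5),
    computed by forward chaining: start from the defeasible facts and
    repeatedly fire any rule whose whole body has already been derived."""
    facts = {f for (f, _) in beliefs if isinstance(f, str)}
    rules = [f for (f, _) in beliefs if not isinstance(f, str)]
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if set(body) <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

# Matches Example 2: a, ¬a, b, d, c, ¬c and ¬b are all derivable from Σ1.
```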
Definition 8 Let A1, A2 and A3 be arguments such that A3 is a subargument of A2. A1 attacks A2 at subargument A3 iff A1 is in conflict with A3.

Note that in the previous definition the subargument A3 is unique, as we will now show. A1 and A3 conflict, and so the claim of A1 is the negation of the claim of A3. Let us assume that there is another subargument A4 such that A1 attacks A2 at A4; then A1 and A4 conflict, and so the claim of A1 is the negation of the claim of A4. As the claim of A1 is a literal, this means that the claim of A4 is the same as the claim of A3. As an argument is minimal, this means that A3 and A4 must be the same argument.

Example 5 Continuing the running example: a1 attacks a2 at a2, a1 attacks a8 at a2, a2 attacks a1 at a1, a2 attacks a5 at a1, a3 attacks a7 at a7, a3 attacks a8 at a8, a5 attacks a6 at a6, a6 attacks a5 at a5, a7 attacks a3 at a3, a7 attacks a6 at a3, a8 attacks a3 at a3, and a8 attacks a6 at a3.

Given that we know one argument attacks another, we need a mechanism for deciding whether the attacking argument successfully defeats the argument being attacked or not. We base this on the preference level of the argument, which is equal to that of the least preferred belief used in its support.

Definition 9 Let A be an argument. The preference level of A, denoted plevel(A), is equal to plevel(φ) such that:
1. φ ∈ Support(A),
2. for all φ′ ∈ Support(A), plevel(φ′) ≤ plevel(φ).

Given that we have an argument A1 that attacks an argument A2 at subargument A3, we say that A1 defeats A2 if the preference level of A1 is the same as or less than (meaning at least as preferred as) the preference level of A3. If it is the same, then A1 is a blocking defeater for A2; if it is less, then A1 is a proper defeater for A2.

Definition 10 Let A1, A2 and A3 be arguments such that A3 is a subargument of A2 and A1 attacks A2 at subargument A3. A1 is a proper defeater for A2 iff plevel(A1) < plevel(A3). A1 is a blocking defeater for A2 iff plevel(A1) = plevel(A3).

Example 6 Continuing the running example: a1 is a blocking defeater for a2, a1 is a blocking defeater for a8, a2 is a blocking defeater for a1, a2 is a blocking defeater for a5, a6 is a proper defeater for a5, a7 is a proper defeater for a3, a7 is a proper defeater for a6, a8 is a proper defeater for a3 and a8 is a proper defeater for a6.

In this section we have proposed a criterion for deciding whether an argument A1 that attacks an argument A2 defeats it or not. In the next section we introduce the warrant procedure from [25], which allows an agent to decide whether, given a set of interacting arguments, a particular argument from this set is ultimately defeated or undefeated.
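The comparisons of Definitions 9 and 10 are straightforward to mechanise. The sketch below uses a hypothetical encoding of ours (an argument is a (support, claim) pair whose supporting beliefs are (formula, level) pairs, with rules as (body, head) pairs) and reproduces two of the defeat relations of Example 6.

```python
def arg_plevel(argument):
    """Definition 9: an argument's preference level is that of its least
    preferred (highest-numbered) supporting belief."""
    support, _claim = argument
    return max(level for (_formula, level) in support)

def defeat_type(attacker, attacked_subargument):
    """Definition 10: a proper defeater is strictly more preferred (lower
    level) than the attacked subargument; a blocking defeater is equally
    preferred; otherwise the attack is not a defeat."""
    if arg_plevel(attacker) < arg_plevel(attacked_subargument):
        return "proper"
    if arg_plevel(attacker) == arg_plevel(attacked_subargument):
        return "blocking"
    return "none"

# Some arguments from the running example:
a1 = (frozenset({("a", 1)}), "a")
a2 = (frozenset({("-a", 1)}), "-a")
a5 = (frozenset({("a", 1), ((("a",), "c"), 3)}), "c")
a6 = (frozenset({("b", 2), ((("b",), "-c"), 2)}), "-c")

# a6 attacks a5 at a5 itself and is a proper defeater (level 2 < 3);
# a1 attacks a2 at a2 and is a blocking defeater (level 1 = 1).
```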
3 Dialectical analysis of arguments

Given a set of beliefs Ψ and an argument A1 ∈ A(Ψ), in order to know whether A1 is defeated or not, an agent has to consider each argument from A(Ψ) that attacks A1 and decide whether or not it defeats it. However, a defeater of A1 may itself be defeated by another argument A2 ∈ A(Ψ). Defeaters may also exist for A2, and these may in turn have defeaters of their own. Therefore, in order to decide whether A1 is defeated, an agent has to consider all defeaters for A1, all of the defeaters for those defeaters, and so on. Following [25], it does so by constructing a dialectical tree, where each node is labelled with an argument, the root node is labelled A1, and the arcs represent the defeat relation between arguments. Each path through the dialectical tree from root node to a leaf represents an argumentation line, where each argument in such a path defeats its predecessor. García and Simari impose some extra constraints on what is an acceptable argumentation line. This is because they wish to ensure that their system avoids such things as circular argumentation and that it imposes properties such as concordance between supporting or interfering arguments. For more information on acceptable argumentation lines and their motivation the reader should refer to [25].

Definition 11 If Λ = [⟨Φ0, φ0⟩, ⟨Φ1, φ1⟩, ⟨Φ2, φ2⟩, ...] is a sequence of arguments such that each element of the sequence ⟨Φi, φi⟩ is a defeater (proper or blocking) of its predecessor ⟨Φi−1, φi−1⟩, then Λ is an argumentation line. Λ is an acceptable argumentation line iff
1. Λ is a finite sequence,
2. Φ0 ∪ Φ2 ∪ Φ4 ∪ ... ⊬ ⊥ and Φ1 ∪ Φ3 ∪ Φ5 ∪ ... ⊬ ⊥,
3. no argument ⟨Φk, φk⟩ appearing in Λ is a subargument of an argument Aj that appears earlier in Λ (j < k),
4. for all i s.t. ⟨Φi, φi⟩ is a blocking defeater for ⟨Φi−1, φi−1⟩, if ⟨Φi+1, φi+1⟩ exists, then ⟨Φi+1, φi+1⟩ is a proper defeater for ⟨Φi, φi⟩.

Example 7 Continuing the running example, the following are all examples of argumentation lines; however, only Λ2 is an acceptable argumentation line. Λ1 is not acceptable as it breaks constraints (3) and (4), whereas Λ3 is not acceptable as it breaks constraint (2).
Λ1 = [a5, a2, a1, a2, a1, a2]
Λ2 = [a5, a6, a7]
Λ3 = [a5, a6, a8]

In order to determine whether the claim of an argument A0 is warranted given a set of beliefs Ψ, the agent must consider every acceptable argumentation line that it can construct from A(Ψ) which starts with A0. It does this by constructing a dialectical tree.

Definition 12 Let Ψ be a, possibly inconsistent, belief base and A0 be an argument such that A0 ∈ A(Ψ). A dialectical tree for A0 constructed from Ψ, denoted T(A0, Ψ), is defined as follows.
1. The root of the tree is labelled with A0.
2. Let N be a node of the tree labelled An, and let Λi = [A0, ..., An] be the sequence of labels on the path from the root to node N. Let arguments B1, B2, ..., Bk be all the defeaters for An that can be formed from Ψ. For each defeater Bj (1 ≤ j ≤ k), if the argumentation line Λi′ = [A0, ..., An, Bj] is an acceptable argumentation line, then the node N has a child Nj that is labelled Bj. If there is no defeater for An or there is no Bj such that Λi′ is acceptable, then N is a leaf node.
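Conditions (2) and (3) of Definition 11 can be checked mechanically. The sketch below uses a hypothetical encoding of ours (arguments as (support, claim) pairs, fact formulas as literal strings with "-" for negation, rule formulas as (body, head) pairs); it approximates the concordance test of condition (2) by forward chaining over each side's pooled supports and looking for a complementary pair of literals, and checks condition (3) directly. Condition (4) would additionally need the defeat machinery of Definitions 9 and 10, and condition (1) is addressed by Proposition 1. On the running example it agrees with Example 7's verdicts.

```python
def neg(lit):
    """Complement of a literal in our string encoding."""
    return lit[1:] if lit.startswith("-") else "-" + lit

def derivable(beliefs):
    """Forward chaining over a pooled set of (formula, level) beliefs."""
    lits = {f for (f, _) in beliefs if isinstance(f, str)}
    rules = [f for (f, _) in beliefs if not isinstance(f, str)]
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if set(body) <= lits and head not in lits:
                lits.add(head)
                changed = True
    return lits

def acceptable_2_and_3(line):
    """Conditions (2) and (3) of Definition 11 only."""
    for parity in (0, 1):                       # condition (2): concordance
        pooled = set()
        for support, _claim in line[parity::2]:
            pooled |= support
        lits = derivable(pooled)
        if any(neg(l) in lits for l in lits):
            return False
    for k, (support, _claim) in enumerate(line):  # condition (3): no repeats
        if any(support <= line[j][0] for j in range(k)):
            return False
    return True

# Arguments from the running example:
a1 = (frozenset({("a", 1)}), "a")
a2 = (frozenset({("-a", 1)}), "-a")
a5 = (frozenset({("a", 1), ((("a",), "c"), 3)}), "c")
a6 = (frozenset({("b", 2), ((("b",), "-c"), 2)}), "-c")
a7 = (frozenset({("d", 1), ((("d",), "-b"), 1)}), "-b")
a8 = (frozenset({("-a", 1), ((("-a",), "-b"), 1)}), "-b")
```

On Λ1 the check already fails at condition (3), because a1 is a subargument of the earlier a5, so the missing condition (4) is not needed to rule it out.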
To ensure that the construction of a dialectical tree is a finite process we must show that it is not possible to construct an infinite argumentation line that meets conditions 2-4 of an acceptable argumentation line (Definition 11). Fortunately, we can show that if condition 3 holds (subarguments of arguments that appear earlier in an argumentation line cannot be repeated) and the arguments in the argumentation line come from a finite set (as they do in the construction of a dialectical tree), then the argumentation line is finite.
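The recursive construction of Definition 12 can be sketched as follows, assuming helper functions `defeaters` and `acceptable` are supplied (both names are ours, and the toy instantiation below hard-codes a few defeat relations between the running example's argument names while simplifying acceptability to forbidding repetition only, so the resulting tree is illustrative rather than the exact tree of the running example).

```python
def build_tree(arg, defeaters, acceptable, line=()):
    """Definition 12 (sketch): recursively attach every defeater whose
    extended argumentation line remains acceptable.  A node is returned
    as an (argument, children) pair."""
    line = line + (arg,)
    children = [build_tree(d, defeaters, acceptable, line)
                for d in defeaters.get(arg, [])
                if acceptable(line + (d,))]
    return (arg, children)

# Toy instantiation: hypothetical defeat relations between argument names,
# with acceptability simplified to "no argument appears twice in the line".
toy_defeaters = {"a5": ["a2", "a6"], "a2": ["a1"],
                 "a1": ["a2"], "a6": ["a7", "a5"]}
no_repeats = lambda line: len(set(line)) == len(line)

tree = build_tree("a5", toy_defeaters, no_repeats)
# The repetition check prunes the circular branches a1 -> a2 and a6 -> a5.
```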

Proposition 1 Let Λ = [A0, A1, A2, ...] be an argumentation line such that, for any Ai appearing in Λ, Ai ∈ A(Φ) where Φ is a finite set of beliefs. If no argument Ak appearing in Λ is a subargument of an argument Aj that appears earlier in Λ (j < k), then Λ is a finite sequence of arguments.

Proof: Since Φ is a finite set, it follows from the definition of an argument (Def. 6) that A(Φ) is also a finite set. Since the arguments from the sequence Λ come from this finite set, and since we are constrained so that no argument in Λ can be repeated, it follows that Λ is a finite sequence.

From the above proposition we see that, although in principle we might go on constructing a dialectical tree indefinitely and never know whether a branch violated condition 1 of an acceptable argumentation line, by meeting condition 3 of an acceptable argumentation line we ensure that this can never be the case. Note that the root node of a dialectical tree T is denoted Root(T). Also note, we define two dialectical trees as being equal if and only if every sequence of labels that appears from the root node to a node in one tree also appears as a sequence of labels from the root node to a node in the other. Our definition of dialectical tree equality takes into account the labels of all nodes that appear in the trees (as opposed to, for example, Chesñevar et al.'s definition of isomorphic dialectical trees [15], where blocking and proper defeaters are distinguished but no general constraint is made on the arguments that appear at the nodes of the trees). We have taken this approach as we will later show that the dialectical tree constructed by two agents during a warrant inquiry dialogue is equal to the dialectical tree that could be constructed from the union of their beliefs.

Definition 13 The dialectical trees T1 and T2 are equal to one another iff
1. the root of T1 is labelled with A0 iff the root of T2 is labelled with A0,
2.
if N1 is a node in T1 and [A0, ..., An] is the sequence of labels on the path from the root of T1 to N1, then there is an N2 s.t. N2 is a node in T2 and [A0, ..., An] is the sequence of labels on the path from the root of T2 to N2,
3. if N2 is a node in T2 and [A0, ..., An] is the sequence of labels on the path from the root of T2 to N2, then there is an N1 s.t. N1 is a node in T1 and [A0, ..., An] is the sequence of labels on the path from the root of T1 to N1.

Following [25], in order to determine whether the root of a dialectical tree is undefeated or not, we have to recursively mark each node in the tree as D (defeated) or U (undefeated), depending on whether it has any undefeated child nodes that are able to defeat it.

Definition 14 Let T(A, Ψ) be a dialectical tree. The corresponding marked dialectical tree of T(A, Ψ) is obtained by marking every node in T(A, Ψ) as follows.
1. All leaves in T(A, Ψ) are marked U.
2. If N is a node of T(A, Ψ) and N is not a leaf node, then N will be marked U iff every child of N is marked D. The node N will be marked D iff it has at least one child marked U.

Example 8 Following the running example, the corresponding marked dialectical tree of T(a5, Σ1) is shown in Figure 1. Note that the arguments a1 and a8 do not appear in

the tree even though they are defeaters of arguments that do appear in the tree; this is because their inclusion would break conditions of an acceptable argumentation line (Definition 11): the inclusion of a_1 would break conditions 3 and 4, and the inclusion of a_8 would break condition 2.

Fig. 1 A marked dialectical tree, T(a_5, Σ_1). The root node ⟨{(a, 1), (a → c, 3)}, c⟩ is marked D; its children ⟨{(¬a, 1)}, ¬a⟩ and ⟨{(b, 2), (b → ¬c, 2)}, ¬c⟩ are marked U and D respectively; the child of the latter, ⟨{(d, 1), (d → ¬b, 1)}, ¬b⟩, is marked U.

The function Status takes an argument A and a set of beliefs Ψ. If the root node of the dialectical tree that has A at its root and is constructed from Ψ is marked U, then Status returns U; if it is marked D, then Status returns D.

Definition 15 The status of an argument A given a set of beliefs Ψ is returned by the function Status : A(B) × ℘(B) → {U, D} such that Status(A, Ψ) = U iff Root(T(A, Ψ)) is marked U in the corresponding marked dialectical tree of T(A, Ψ), and Status(A, Ψ) = D iff Root(T(A, Ψ)) is marked D in the corresponding marked dialectical tree of T(A, Ψ). The claim of an argument is warranted by the belief base if and only if the status of the root of the associated dialectical tree is U.

Example 9 Following the running example, as the root node of the tree T(a_5, Σ_1) is marked D (shown in Figure 1), and hence Status(a_5, Σ_1) = D, we see that the argument a_5 is not warranted by the belief base Σ_1.

This warrant procedure makes it possible for a single agent to reason with incomplete, inconsistent and uncertain knowledge. In the following two sections we propose a dialogue system that allows two agents to use this warrant procedure to jointly reason with their beliefs.

4 Representing dialogues

In this section we provide a general framework for representing dialogues. We define how dialogues may be nested one within another, what it means for a dialogue to terminate, and what we mean when we refer to the current dialogue.
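As an aside, the single-agent warrant procedure of Definitions 14 and 15 is straightforward to implement. The following minimal Python sketch is illustrative only (the Node class, string labels and function names are our own assumptions, not part of the system); it reproduces the marking of the tree in Figure 1:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A node of a dialectical tree: an argument and its counter-arguments."""
    argument: str                      # a label standing in for <support, claim>
    children: list = field(default_factory=list)

def mark(node):
    """Definition 14: a node is U iff every child is D, and D iff at least
    one child is U; all() is vacuously true at a leaf, so leaves are U."""
    return "U" if all(mark(child) == "D" for child in node.children) else "D"

def status(root):
    """Definition 15: the status of the argument at the root of its tree."""
    return mark(root)

# The tree of Figure 1: the root argument for c has two defeaters; one of
# them is itself defeated, but the other is not, so the root comes out D.
tree = Node("<{(a,1), (a->c,3)}, c>", [
    Node("<{(~a,1)}, ~a>"),                      # leaf, marked U
    Node("<{(b,2), (b->~c,2)}, ~c>", [
        Node("<{(d,1), (d->~b,1)}, ~b>"),        # leaf, marked U
    ]),                                          # has a U child, marked D
])
print(status(tree))  # prints "D", matching Example 9
```

As in Example 9, the root is marked D, so a_5 is not warranted by Σ_1.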
The framework given in this section is not specific to warrant inquiry and argument inquiry dialogues (the details of which are given in the following section); it can also be used to represent dialogues of other types (e.g. [13]). The communicative acts in a dialogue are called moves. We assume that there are always exactly two agents (participants) taking part in a dialogue, each with its own

Move     Format
open     ⟨x, open, dialogue(θ, γ)⟩
assert   ⟨x, assert, ⟨Φ, φ⟩⟩
close    ⟨x, close, dialogue(θ, γ)⟩

Table 1 The format for moves used in warrant inquiry and argument inquiry dialogues, where x ∈ I, ⟨Φ, φ⟩ is an argument, and either θ = wi (for warrant inquiry) and γ ∈ S (i.e. γ is a defeasible fact), or θ = ai (for argument inquiry) and γ ∈ R (i.e. γ is a defeasible rule).

identifier taken from the set I = {1, 2}. Each participant takes it in turn to make a move to the other participant. For a dialogue involving participants 1, 2 ∈ I, we also refer to the participants using the variables x and x̂, such that if x is 1 then x̂ is 2, and if x is 2 then x̂ is 1. A move in our framework is of the form ⟨Agent, Act, Content⟩. Agent is the identifier of the agent generating the move, Act is the type of move, and Content gives the details of the move. The format for moves used in warrant inquiry and argument inquiry dialogues is shown in Table 1, and the set of all moves meeting the format defined in Table 1 is denoted M. Note that the framework allows for other types of dialogues to be generated, and these might require the addition of extra moves (e.g. such as those suggested in [13]). Also, Sender : M → I is a function such that Sender(⟨Agent, Act, Content⟩) = Agent.

A dialogue is simply a sequence of moves, each of which is made from one participant to the other. As a dialogue progresses over time, we denote each timepoint by a natural number (N = {1, 2, 3, ...}). Each move is indexed by the timepoint at which the move was made. Exactly one move is made at each timepoint. The dialogue itself is indexed with two timepoints, indexing the first and last moves of the dialogue.

Definition 16 A dialogue, denoted D_r^t, is a sequence of moves [m_r, ..., m_t] involving two participants in I = {1, 2}, where r, t ∈ N and r ≤ t, such that:
1. the first move of the dialogue, m_r, is a move of the form ⟨x, open, dialogue(θ, γ)⟩,
2. Sender(m_s) ∈ I (r ≤ s ≤ t),
3. Sender(m_s) ≠ Sender(m_{s+1}) (r ≤ s < t).
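The three conditions of Definition 16 are easy to check mechanically. A minimal Python sketch, in which the triple encoding of moves and the function name are our own illustrative assumptions:

```python
# A dialogue (Definition 16) is a sequence of moves, each a triple
# (agent, act, content); this encoding is an assumption for illustration.

PARTICIPANTS = {1, 2}

def is_dialogue(moves):
    """Check the three conditions of Definition 16."""
    if not moves:
        return False
    # Condition 1: the first move opens a dialogue.
    if moves[0][1] != "open":
        return False
    # Condition 2: every move is made by a participant.
    if any(agent not in PARTICIPANTS for agent, _act, _content in moves):
        return False
    # Condition 3: the participants strictly alternate.
    return all(moves[s][0] != moves[s + 1][0] for s in range(len(moves) - 1))

moves = [
    (1, "open",   ("ai", "b -> c")),
    (2, "assert", ({"(b, 2)"}, "b")),
    (1, "close",  ("ai", "b -> c")),
    (2, "close",  ("ai", "b -> c")),
]
```

The sequence above opens an argument inquiry dialogue and ends with two adjacent close moves, anticipating the matched-close defined below.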
The type of the dialogue D_r^t is returned by Type(D_r^t) such that Type(D_r^t) = θ (i.e. the type of the dialogue is determined by the content of the first move made). The topic of the dialogue D_r^t is returned by Topic(D_r^t) such that Topic(D_r^t) = γ (i.e. the topic of the dialogue is determined by the content of the first move made). The set of all dialogues is denoted D.

The first move of a dialogue D_r^t must always be an open move (condition 1 of the previous definition), every move of the dialogue must be made by a participant of the dialogue (condition 2), and the agents take it in turns to make moves (condition 3). The type and the topic of a dialogue are determined by the content of the first move made; if the first move made in a dialogue is ⟨x, open, dialogue(θ, γ)⟩, then the type of the dialogue is θ and the topic of the dialogue is γ. In this article, we consider two different types of dialogue (i.e. two different values for θ): wi (for warrant inquiry) and ai (for argument inquiry). If a dialogue is a warrant inquiry dialogue, then its topic must be a defeasible fact; if a dialogue is an argument inquiry dialogue, then its topic must be a defeasible rule; these are requirements of the format of open moves defined in Table 1. Although we consider only warrant inquiry and argument inquiry dialogues

here, our definition of a dialogue is general, so as to allow dialogues of other types to be considered within our framework. We now define some terminology that allows us to talk about the relationship between two dialogues.

Definition 17 Let D_r^t and D_{r_1}^{t_1} be two dialogues. D_{r_1}^{t_1} is a sub-dialogue of D_r^t iff D_{r_1}^{t_1} is a sub-sequence of D_r^t (r < r_1 ≤ t_1 ≤ t). D_r^t is a top-level dialogue iff r = 1; the set of all top-level dialogues is denoted D_top. D_1^t is a top-dialogue of D_r^t iff either the sequence D_1^t is the same as the sequence D_r^t or D_r^t is a sub-dialogue of D_1^t. If D_r^t is a sequence of n moves, then D_r^{t_2} extends D_r^t iff the first n moves of D_r^{t_2} are the sequence D_r^t.

In order to terminate a dialogue, two close moves must appear next to each other in the sequence (called a matched-close); this means that each participating agent must agree to the termination of the dialogue. A close move is used to indicate that an agent wishes to terminate the dialogue; it may be the case, however, that the other agent still has something it wishes to say, which may in turn cause the original agent to change its mind about wishing to terminate the dialogue.

Definition 18 Let D_r^t be a dialogue of type θ ∈ {wi, ai} with participants I = {1, 2} such that Topic(D_r^t) = γ. We say that m_s (r < s ≤ t) is a matched-close for D_r^t iff m_{s-1} = ⟨x, close, dialogue(θ, γ)⟩ and m_s = ⟨x̂, close, dialogue(θ, γ)⟩.

So a matched-close terminates a dialogue D_r^t, but only if D_r^t has not already terminated and any sub-dialogues embedded within D_r^t have already terminated; this notion will be needed later on to define well-formed inquiry dialogues.

Definition 19 Let D_r^t be a dialogue. D_r^t terminates at t iff the following conditions hold:
1. m_t is a matched-close for D_r^t,
2. ∄ t_1 s.t. D_r^{t_1} terminates at t_1 and D_r^t extends D_r^{t_1},
3. ∀ D_{r_1}^{t_1}, if D_{r_1}^{t_1} is a sub-dialogue of D_r^t, then ∃ D_{r_1}^{t_2} s.t. D_{r_1}^{t_2} terminates at t_2, either D_{r_1}^{t_2} extends D_{r_1}^{t_1} or D_{r_1}^{t_1} extends D_{r_1}^{t_2}, and D_{r_1}^{t_2} is a sub-dialogue of D_r^t.

As we are often dealing with multiple nested dialogues, it is useful to refer to the current dialogue, which is the innermost dialogue that has not yet terminated. As dialogues of one type may be nested within dialogues of another type, an agent must refer to the current dialogue in order to know which protocol to follow.

Definition 20 Let D_r^t be a dialogue. The current dialogue is given by Current(D_r^t) such that Current(D_r^t) = D_{r_1}^t (1 ≤ r ≤ r_1 ≤ t) where the following conditions hold:
1. m_{r_1} = ⟨x, open, dialogue(θ, γ)⟩ for some x ∈ I, some γ ∈ B and some θ ∈ {wi, ai},
2. ∀ D_{r_2}^{t_1}, if D_{r_2}^{t_1} is a sub-dialogue of D_{r_1}^t, then ∃ D_{r_2}^{t_2} s.t. either D_{r_2}^{t_2} extends D_{r_2}^{t_1} or D_{r_2}^{t_1} extends D_{r_2}^{t_2}, and D_{r_2}^{t_2} is a sub-dialogue of D_{r_1}^t and D_{r_2}^{t_2} terminates at t_2,

3. ∄ D_{r_1}^{t_3} s.t. D_{r_1}^t extends D_{r_1}^{t_3} and D_{r_1}^{t_3} terminates at t_3.
If the above conditions do not hold then Current(D_r^t) = null. The topic of the current dialogue is returned by the function cTopic(D_r^t) such that cTopic(D_r^t) = Topic(Current(D_r^t)). The type of the current dialogue is returned by the function cType(D_r^t) such that cType(D_r^t) = Type(Current(D_r^t)).

We now give a schematic example of nested dialogues.

Example 10 An example of nested dialogues is shown in Figure 2. In this example:
Current(D_1^t) = D_1^t
Current(D_1^{t-1}) = D_i^{t-1}
Current(D_1^k) = D_i^k
Current(D_1^{k-1}) = D_j^{k-1}
Current(D_i^t) = null
Current(D_i^{t-1}) = D_i^{t-1}
Current(D_i^k) = D_i^k
Current(D_i^{k-1}) = D_j^{k-1}
Current(D_j^k) = null
Current(D_j^{k-1}) = D_j^{k-1}

We have now defined our general framework for representing dialogues; in the following section we give the details needed to generate argument inquiry and warrant inquiry dialogues.

5 Generating dialogues

In this section we give the details specific to argument inquiry and warrant inquiry dialogues that, along with the general framework given in the previous section, comprise our inquiry dialogue system. In Sections 5.1 and 5.2 we give the protocols needed to model legal argument inquiry and warrant inquiry dialogues, we define what a well-formed argument inquiry dialogue and a well-formed warrant inquiry dialogue are (a dialogue that terminates and whose moves are legal according to the relevant protocol), and we define the outcomes of the two dialogue types. In Section 5.3 we give the details of a strategy that can be used to generate legal argument inquiry and warrant inquiry dialogues (i.e. that allows an agent to select exactly one of the legal moves to make at any point in the dialogue). We adopt the common approach of associating a commitment store with each agent participating in a dialogue (e.g. [34,39]).
A commitment store is a set of beliefs that the agent is publicly committed to at the current point of the dialogue (i.e. that it has asserted). As a commitment store consists of things that the agent has already publicly declared, its contents are visible to the other agent participating in the dialogue. For this reason, when constructing an argument, an agent may make use of not only its own beliefs but also those in the other agent's commitment store.

Definition 21 A commitment store is a set of beliefs denoted CS_x^t (i.e. CS_x^t ⊆ B), where x ∈ I is an agent and t ∈ N is a timepoint.

When an agent enters into a top-level dialogue of any kind, a commitment store is created and persists until that dialogue has terminated (i.e. this same commitment store is used for any sub-dialogues of the top-level dialogue). If an agent makes a move asserting an argument, every element of the support is added to the agent's commitment store. This is the only time the commitment store is updated.

Fig. 2 Nested dialogues, where 1 < i < j < k < t−1 and
D_1^t = [m_1, ..., m_i, ..., m_j, ..., m_k, m_{k+1}, ..., m_{t-1}, m_t],
m_1 = ⟨P_1, open, dialogue(θ_1, φ_1)⟩,
m_i = ⟨P_i, open, dialogue(θ_i, φ_i)⟩,
m_j = ⟨P_j, open, dialogue(θ_j, φ_j)⟩,
m_{k-1} = ⟨P_{k-1}, close, dialogue(θ_j, φ_j)⟩,
m_k = ⟨P_k, close, dialogue(θ_j, φ_j)⟩,
m_{t-1} = ⟨P_{t-1}, close, dialogue(θ_i, φ_i)⟩,
m_t = ⟨P_t, close, dialogue(θ_i, φ_i)⟩.
D_1^t is a top-level dialogue that has not yet terminated. D_i^t is a sub-dialogue of D_1^t that terminates at t. D_j^k is a sub-dialogue of both D_1^t and D_i^t that terminates at k. D_1^t is a top-dialogue of D_1^t. D_1^k is a top-dialogue of D_j^k. D_1^t is a top-dialogue of D_i^t. D_1^k is a top-dialogue of D_1^k.

Definition 22 Commitment store update. Let the current dialogue be D_r^t with participants I = {1, 2}.
CS_x^t = ∅ iff t = 0,
CS_x^t = CS_x^{t-1} ∪ Φ iff m_t = ⟨x, assert, ⟨Φ, φ⟩⟩,
CS_x^t = CS_x^{t-1} otherwise.

The only move used in argument inquiry or warrant inquiry dialogues that affects an agent's commitment store is the assert move, which causes the support of the asserted argument to be added to the commitment store; hence the commitment store of an agent participating in an argument inquiry or warrant inquiry dialogue grows monotonically over time. If we were to define more moves in order to allow our system to generate other dialogue types, it would be necessary to define the effect of those moves on an agent's commitment store. For example, a retract move (which causes a belief to be removed from the commitment store) may be necessary for a negotiation dialogue, in which case the commitment store of an agent participating in a negotiation dialogue would not necessarily grow monotonically over time.

In the following subsection we give the details needed to allow us to model well-formed argument inquiry dialogues.
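The update of Definition 22 can be sketched operationally. In the following minimal Python sketch the move encoding, dictionary representation of the two stores and function name are our own illustrative assumptions:

```python
# Commitment store update (Definition 22): starting from empty stores at
# t = 0, an assert move adds the support of the asserted argument to the
# asserting agent's store; no other move changes any store.

def update_stores(stores, move):
    agent, act, content = move
    if act == "assert":
        support, _claim = content
        stores = {**stores, agent: stores[agent] | set(support)}
    return stores  # all other moves leave both stores unchanged

stores = {1: set(), 2: set()}
stores = update_stores(stores, (1, "open", ("ai", "b -> c")))
stores = update_stores(stores, (2, "assert", ([("b", 2)], "b")))
```

After these two moves, agent 2's store contains the support {(b, 2)} while agent 1's store is still empty; because assert is the only updating move, each store grows monotonically, as noted above.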

5.1 Modelling argument inquiry dialogues

An argument inquiry dialogue is initiated when an agent wants to construct an argument for a certain claim, say φ, that it cannot construct alone. If the agent knows of a domain belief whose consequent is that claim, say (α_1 ∧ ... ∧ α_n → φ, L), then the agent will open an argument inquiry dialogue with α_1 ∧ ... ∧ α_n → φ as its topic. If, between them, the two participating agents can provide arguments for each of the elements α_i (1 ≤ i ≤ n) in the antecedent of the topic, then it is possible for an argument for φ to be constructed.

We define the query store as the set of literals that could help construct an argument for the consequent of the topic of an argument inquiry dialogue: when an argument inquiry dialogue with topic α_1 ∧ ... ∧ α_n → β is opened, a query store associated with that dialogue is created, whose contents are {α_1, ..., α_n, β}. Throughout the dialogue the participating agents will both try to provide arguments for the literals in the query store. This may lead them to open further nested argument inquiry dialogues whose topic is a rule whose consequent is a literal in the current query store.

Definition 23 For a dialogue D_r^t with participants I = {1, 2}, a query store, denoted QS_r, is a finite set of literals such that
QS_r = {α_1, ..., α_n, β} iff m_r = ⟨x, open, dialogue(ai, α_1 ∧ ... ∧ α_n → β)⟩,
QS_r = ∅ otherwise.
The query store of the current dialogue is given by cQS(D_r^t) such that cQS(D_r^t) = QS_{r_1} iff Current(D_r^t) = D_{r_1}^t.

A query store is fixed and is determined by the open move that opens the associated argument inquiry dialogue. If the current dialogue is an argument inquiry dialogue, then the agents consult the query store of the current dialogue in order to determine which arguments it might be helpful to try to construct (i.e. those whose claim is a member of the current query store).
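The construction in Definition 23 amounts to collecting the antecedent literals and the consequent of the topic. A minimal Python sketch, in which encoding a defeasible rule as an (antecedents, consequent) pair is our own assumption:

```python
# Query store construction (Definition 23): opening an argument inquiry
# dialogue with topic a1 ∧ ... ∧ an -> b creates the query store
# {a1, ..., an, b}; for any other move the store is empty.

def query_store(opening_move):
    _agent, act, content = opening_move
    if act == "open":
        dialogue_type, topic = content
        if dialogue_type == "ai":
            antecedents, consequent = topic
            return set(antecedents) | {consequent}
    return set()

qs = query_store((1, "open", ("ai", (("a1", "a2"), "b"))))
# qs == {"a1", "a2", "b"}
```

The store is computed once from the open move and never changes, matching the observation above that a query store is fixed.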
A protocol is a function that returns the set of moves that are legal for an agent to make at a particular point in a particular type of dialogue. Here we give the specific protocol for argument inquiry dialogues. It takes the top-level dialogue that the agents are participating in and returns the set of legal moves that the agent may make.

Definition 24 The argument inquiry protocol is a function Π_ai : D_top → ℘(M). If D_1^t is a top-level dialogue with participants I = {1, 2} such that Sender(m_t) = x̂ (1 ≤ t) and cTopic(D_1^t) = γ, then
Π_ai(D_1^t) = Π_ai^assert(D_1^t) ∪ Π_ai^open(D_1^t) ∪ {⟨x, close, dialogue(ai, γ)⟩}
where
Π_ai^assert(D_1^t) = {⟨x, assert, ⟨Φ, φ⟩⟩ | (1) φ ∈ cQS(D_1^t), and (2) ∄ t′ s.t. 1 < t′ ≤ t and m_{t′} = ⟨x′, assert, ⟨Φ, φ⟩⟩ and x′ ∈ I},
Π_ai^open(D_1^t) = {⟨x, open, dialogue(ai, β_1 ∧ ... ∧ β_n → α)⟩ | (1) α ∈ cQS(D_1^t), and (2) ∄ t′ s.t. 1 < t′ ≤ t and m_{t′} = ⟨x′, open, dialogue(ai, β_1 ∧ ... ∧ β_n → α)⟩ and x′ ∈ I}.

The argument inquiry protocol tells us that an agent may always legally make a close move, indicating that it wishes to terminate the dialogue (recall, however, that the dialogue only terminates when both agents indicate that they wish to terminate it, leading to a matched-close). An agent may legally assert an argument that has not previously been asserted, as long as its claim is in the current query store (and so may help the agents find an argument for the consequent of the topic of the dialogue). An agent may legally open a new, embedded argument inquiry dialogue provided that an argument inquiry dialogue with the same topic has not previously been opened and that the topic of the new argument inquiry dialogue is a defeasible rule that has a member of the current query store as its consequent (and so any arguments successfully found in this new argument inquiry dialogue may help the agents find an argument for the consequent of the topic of the current dialogue). It is straightforward to check conformance with the protocol, as it refers only to public elements of the dialogue.

We are now able to define a well-formed argument inquiry dialogue. This is a dialogue that starts with a move opening an argument inquiry dialogue (condition 1 of the following definition), that has a continuation which terminates (condition 2), and whose moves conform to the argument inquiry protocol (condition 3).

Definition 25 Let D_r^t be a dialogue with participants I = {1, 2}. D_r^t is a well-formed argument inquiry dialogue iff the following conditions hold:
1. m_r = ⟨x, open, dialogue(ai, γ)⟩ where x ∈ I and γ ∈ R (i.e. γ is a defeasible rule),
2. ∃ t′ s.t. t ≤ t′, D_r^{t′} extends D_r^t, and D_r^{t′} terminates at t′,
3. ∀ s′ s.t. r ≤ s′ < t and D_r^t extends D_r^{s′}, if D_1^t is a top-dialogue of D_r^t and D_1^{s′} is a top-dialogue of D_r^{s′} and D_1^t extends D_1^{s′} and Sender(m_{s′}) = x′ (where x′ ∈ I), then m_{s′+1} ∈ Π_ai(D_1^{s′}) (i.e. the next move is legal for x̂′, where x̂′ ∈ I, x̂′ ≠ x′).
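The legal-move computation described above can be sketched operationally. In this minimal Python sketch all encodings (tuples for rules and arguments, the parameter names, tracking history as a move list) are our own illustrative assumptions and simplify away the nested-dialogue machinery:

```python
# Legal moves in the spirit of the argument inquiry protocol (Definition 24):
# close is always legal; an assert is legal if its claim is in the current
# query store and the same argument has not been asserted before; an open is
# legal if the proposed topic's consequent is in the current query store and
# no dialogue with that topic has been opened before.

def legal_moves(mover, topic, history, query_store, candidate_args, known_rules):
    moves = [(mover, "close", ("ai", topic))]
    past_asserts = {m[2] for m in history if m[1] == "assert"}
    past_topics = {m[2][1] for m in history if m[1] == "open"}
    for support, claim in candidate_args:
        if claim in query_store and (support, claim) not in past_asserts:
            moves.append((mover, "assert", (support, claim)))
    for antecedents, consequent in known_rules:
        if consequent in query_store and (antecedents, consequent) not in past_topics:
            moves.append((mover, "open", ("ai", (antecedents, consequent))))
    return moves

topic = (("a",), "b")                     # topic: a -> b, query store {a, b}
history = [(1, "open", ("ai", topic))]
ms = legal_moves(2, topic, history, {"a", "b"},
                 candidate_args=[((("a", 1),), "a")],
                 known_rules=[(("c",), "a")])
```

Here agent 2 has three legal moves: close, asserting an argument for a, or opening a nested argument inquiry dialogue with topic c → a. A strategy, as defined later, would then select exactly one of these.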
The set of all well-formed argument inquiry dialogues is denoted D_ai. We define the outcome of an argument inquiry dialogue as the set of all arguments that can be constructed from the union of the commitment stores and whose claims are in the query store.

Definition 26 The argument inquiry outcome of a dialogue is given by a function Outcome_ai : D_ai → ℘(A(B)). If D_r^t is a well-formed argument inquiry dialogue with participants I = {1, 2}, then
Outcome_ai(D_r^t) = {⟨Φ, φ⟩ ∈ A(CS_1^t ∪ CS_2^t) | φ ∈ QS_r}.

We are now able to model well-formed argument inquiry dialogues and determine their outcome. In the following subsection we give the details that we need to model well-formed warrant inquiry dialogues.

5.2 Modelling warrant inquiry dialogues

The goal of a warrant inquiry dialogue is to jointly construct a dialectical tree whose root is an argument for the topic of the dialogue; the topic of the dialogue is warranted


More information

A Multi-agent System for Knowledge Management based on the Implicit Culture Framework

A Multi-agent System for Knowledge Management based on the Implicit Culture Framework A Multi-agent System for Knowledge Management based on the Implicit Culture Framework Enrico Blanzieri Paolo Giorgini Fausto Giunchiglia Claudio Zanoni Department of Information and Communication Technology

More information

Lecture 1. Basic Concepts of Set Theory, Functions and Relations

Lecture 1. Basic Concepts of Set Theory, Functions and Relations September 7, 2005 p. 1 Lecture 1. Basic Concepts of Set Theory, Functions and Relations 0. Preliminaries...1 1. Basic Concepts of Set Theory...1 1.1. Sets and elements...1 1.2. Specification of sets...2

More information

INCIDENCE-BETWEENNESS GEOMETRY

INCIDENCE-BETWEENNESS GEOMETRY INCIDENCE-BETWEENNESS GEOMETRY MATH 410, CSUSM. SPRING 2008. PROFESSOR AITKEN This document covers the geometry that can be developed with just the axioms related to incidence and betweenness. The full

More information

Creating, Solving, and Graphing Systems of Linear Equations and Linear Inequalities

Creating, Solving, and Graphing Systems of Linear Equations and Linear Inequalities Algebra 1, Quarter 2, Unit 2.1 Creating, Solving, and Graphing Systems of Linear Equations and Linear Inequalities Overview Number of instructional days: 15 (1 day = 45 60 minutes) Content to be learned

More information

BPMN Business Process Modeling Notation

BPMN Business Process Modeling Notation BPMN (BPMN) is a graphical notation that describes the logic of steps in a business process. This notation has been especially designed to coordinate the sequence of processes and messages that flow between

More information

Practical Guide to the Simplex Method of Linear Programming

Practical Guide to the Simplex Method of Linear Programming Practical Guide to the Simplex Method of Linear Programming Marcel Oliver Revised: April, 0 The basic steps of the simplex algorithm Step : Write the linear programming problem in standard form Linear

More information

Turing Degrees and Definability of the Jump. Theodore A. Slaman. University of California, Berkeley. CJuly, 2005

Turing Degrees and Definability of the Jump. Theodore A. Slaman. University of California, Berkeley. CJuly, 2005 Turing Degrees and Definability of the Jump Theodore A. Slaman University of California, Berkeley CJuly, 2005 Outline Lecture 1 Forcing in arithmetic Coding and decoding theorems Automorphisms of countable

More information

Chapter 11 Number Theory

Chapter 11 Number Theory Chapter 11 Number Theory Number theory is one of the oldest branches of mathematics. For many years people who studied number theory delighted in its pure nature because there were few practical applications

More information

MATH10040 Chapter 2: Prime and relatively prime numbers

MATH10040 Chapter 2: Prime and relatively prime numbers MATH10040 Chapter 2: Prime and relatively prime numbers Recall the basic definition: 1. Prime numbers Definition 1.1. Recall that a positive integer is said to be prime if it has precisely two positive

More information

8 Divisibility and prime numbers

8 Divisibility and prime numbers 8 Divisibility and prime numbers 8.1 Divisibility In this short section we extend the concept of a multiple from the natural numbers to the integers. We also summarize several other terms that express

More information

Integrating Pattern Mining in Relational Databases

Integrating Pattern Mining in Relational Databases Integrating Pattern Mining in Relational Databases Toon Calders, Bart Goethals, and Adriana Prado University of Antwerp, Belgium {toon.calders, bart.goethals, adriana.prado}@ua.ac.be Abstract. Almost a

More information

Formal Handling of Threats and Rewards in a Negotiation Dialogue

Formal Handling of Threats and Rewards in a Negotiation Dialogue Formal Handling of Threats and Rewards in a Negotiation Dialogue Leila Amgoud IRIT - CNRS 118, route de Narbonne Toulouse, France amgoud@irit.fr Henri Prade IRIT - CNRS 118, route de Narbonne Toulouse,

More information

GOAL-BASED INTELLIGENT AGENTS

GOAL-BASED INTELLIGENT AGENTS International Journal of Information Technology, Vol. 9 No. 1 GOAL-BASED INTELLIGENT AGENTS Zhiqi Shen, Robert Gay and Xuehong Tao ICIS, School of EEE, Nanyang Technological University, Singapore 639798

More information

Mathematical Induction

Mathematical Induction Mathematical Induction In logic, we often want to prove that every member of an infinite set has some feature. E.g., we would like to show: N 1 : is a number 1 : has the feature Φ ( x)(n 1 x! 1 x) How

More information

Math 4310 Handout - Quotient Vector Spaces

Math 4310 Handout - Quotient Vector Spaces Math 4310 Handout - Quotient Vector Spaces Dan Collins The textbook defines a subspace of a vector space in Chapter 4, but it avoids ever discussing the notion of a quotient space. This is understandable

More information

Row Echelon Form and Reduced Row Echelon Form

Row Echelon Form and Reduced Row Echelon Form These notes closely follow the presentation of the material given in David C Lay s textbook Linear Algebra and its Applications (3rd edition) These notes are intended primarily for in-class presentation

More information

2x + y = 3. Since the second equation is precisely the same as the first equation, it is enough to find x and y satisfying the system

2x + y = 3. Since the second equation is precisely the same as the first equation, it is enough to find x and y satisfying the system 1. Systems of linear equations We are interested in the solutions to systems of linear equations. A linear equation is of the form 3x 5y + 2z + w = 3. The key thing is that we don t multiply the variables

More information

6.045: Automata, Computability, and Complexity Or, Great Ideas in Theoretical Computer Science Spring, 2010. Class 4 Nancy Lynch

6.045: Automata, Computability, and Complexity Or, Great Ideas in Theoretical Computer Science Spring, 2010. Class 4 Nancy Lynch 6.045: Automata, Computability, and Complexity Or, Great Ideas in Theoretical Computer Science Spring, 2010 Class 4 Nancy Lynch Today Two more models of computation: Nondeterministic Finite Automata (NFAs)

More information

Basic Probability Concepts

Basic Probability Concepts page 1 Chapter 1 Basic Probability Concepts 1.1 Sample and Event Spaces 1.1.1 Sample Space A probabilistic (or statistical) experiment has the following characteristics: (a) the set of all possible outcomes

More information

Data Structures Fibonacci Heaps, Amortized Analysis

Data Structures Fibonacci Heaps, Amortized Analysis Chapter 4 Data Structures Fibonacci Heaps, Amortized Analysis Algorithm Theory WS 2012/13 Fabian Kuhn Fibonacci Heaps Lacy merge variant of binomial heaps: Do not merge trees as long as possible Structure:

More information

CSE 326, Data Structures. Sample Final Exam. Problem Max Points Score 1 14 (2x7) 2 18 (3x6) 3 4 4 7 5 9 6 16 7 8 8 4 9 8 10 4 Total 92.

CSE 326, Data Structures. Sample Final Exam. Problem Max Points Score 1 14 (2x7) 2 18 (3x6) 3 4 4 7 5 9 6 16 7 8 8 4 9 8 10 4 Total 92. Name: Email ID: CSE 326, Data Structures Section: Sample Final Exam Instructions: The exam is closed book, closed notes. Unless otherwise stated, N denotes the number of elements in the data structure

More information

WHAT ARE MATHEMATICAL PROOFS AND WHY THEY ARE IMPORTANT?

WHAT ARE MATHEMATICAL PROOFS AND WHY THEY ARE IMPORTANT? WHAT ARE MATHEMATICAL PROOFS AND WHY THEY ARE IMPORTANT? introduction Many students seem to have trouble with the notion of a mathematical proof. People that come to a course like Math 216, who certainly

More information

INTERNATIONAL FRAMEWORK FOR ASSURANCE ENGAGEMENTS CONTENTS

INTERNATIONAL FRAMEWORK FOR ASSURANCE ENGAGEMENTS CONTENTS INTERNATIONAL FOR ASSURANCE ENGAGEMENTS (Effective for assurance reports issued on or after January 1, 2005) CONTENTS Paragraph Introduction... 1 6 Definition and Objective of an Assurance Engagement...

More information

TEACHER IDENTITY AND DIALOGUE: A COMMENT ON VAN RIJSWIJK, AKKERMAN & KOSTER. Willem Wardekker VU University Amsterdam, The Netherlands

TEACHER IDENTITY AND DIALOGUE: A COMMENT ON VAN RIJSWIJK, AKKERMAN & KOSTER. Willem Wardekker VU University Amsterdam, The Netherlands International Journal for Dialogical Science Spring 2013. Vol. 7, No. 1, 61-65 Copyright 2013 by Willem Wardekker TEACHER IDENTITY AND DIALOGUE: A COMMENT ON VAN RIJSWIJK, AKKERMAN & KOSTER Willem Wardekker

More information

The Graphical Method: An Example

The Graphical Method: An Example The Graphical Method: An Example Consider the following linear program: Maximize 4x 1 +3x 2 Subject to: 2x 1 +3x 2 6 (1) 3x 1 +2x 2 3 (2) 2x 2 5 (3) 2x 1 +x 2 4 (4) x 1, x 2 0, where, for ease of reference,

More information

Sharing online cultural experiences: An argument-based approach

Sharing online cultural experiences: An argument-based approach Sharing online cultural experiences: An argument-based approach Leila AMGOUD Roberto CONFALONIERI Dave DE JONGE Mark D INVERNO Katina HAZELDEN Nardine OSMAN Henri PRADE Carles SIERRA Matthew YEE-KING Abstract.

More information

Solutions Q1, Q3, Q4.(a), Q5, Q6 to INTLOGS16 Test 1

Solutions Q1, Q3, Q4.(a), Q5, Q6 to INTLOGS16 Test 1 Solutions Q1, Q3, Q4.(a), Q5, Q6 to INTLOGS16 Test 1 Prof S Bringsjord 0317161200NY Contents I Problems 1 II Solutions 3 Solution to Q1 3 Solutions to Q3 4 Solutions to Q4.(a) (i) 4 Solution to Q4.(a)........................................

More information

Passive Threats among Agents in State Oriented Domains

Passive Threats among Agents in State Oriented Domains Passive Threats among Agents in State Oriented Domains Yair B. Weinberger and Jeffrey S. Rosenschein ½ Abstract. Previous work in multiagent systems has used tools from game theory to analyze negotiation

More information

CHAPTER 2. Logic. 1. Logic Definitions. Notation: Variables are used to represent propositions. The most common variables used are p, q, and r.

CHAPTER 2. Logic. 1. Logic Definitions. Notation: Variables are used to represent propositions. The most common variables used are p, q, and r. CHAPTER 2 Logic 1. Logic Definitions 1.1. Propositions. Definition 1.1.1. A proposition is a declarative sentence that is either true (denoted either T or 1) or false (denoted either F or 0). Notation:

More information

Kant s Fundamental Principles of the Metaphysic of Morals

Kant s Fundamental Principles of the Metaphysic of Morals Kant s Fundamental Principles of the Metaphysic of Morals G. J. Mattey Winter, 2015/ Philosophy 1 The Division of Philosophical Labor Kant generally endorses the ancient Greek division of philosophy into

More information

Binary Search Trees. A Generic Tree. Binary Trees. Nodes in a binary search tree ( B-S-T) are of the form. P parent. Key. Satellite data L R

Binary Search Trees. A Generic Tree. Binary Trees. Nodes in a binary search tree ( B-S-T) are of the form. P parent. Key. Satellite data L R Binary Search Trees A Generic Tree Nodes in a binary search tree ( B-S-T) are of the form P parent Key A Satellite data L R B C D E F G H I J The B-S-T has a root node which is the only node whose parent

More information

Victor Shoup Avi Rubin. fshoup,rubing@bellcore.com. Abstract

Victor Shoup Avi Rubin. fshoup,rubing@bellcore.com. Abstract Session Key Distribution Using Smart Cards Victor Shoup Avi Rubin Bellcore, 445 South St., Morristown, NJ 07960 fshoup,rubing@bellcore.com Abstract In this paper, we investigate a method by which smart

More information

Harnessing Ontologies for Argument-based Decision-Making in Breast Cancer

Harnessing Ontologies for Argument-based Decision-Making in Breast Cancer Harnessing Ontologies for Argument-based Decision-Making in Breast Cancer Matt Williams London Research Institute, Cancer Research UK 44 Lincoln Inn Fields, London WC2A 3PX Matthew.Williams@cancer.org.uk

More information

Lecture 5 - CPA security, Pseudorandom functions

Lecture 5 - CPA security, Pseudorandom functions Lecture 5 - CPA security, Pseudorandom functions Boaz Barak October 2, 2007 Reading Pages 82 93 and 221 225 of KL (sections 3.5, 3.6.1, 3.6.2 and 6.5). See also Goldreich (Vol I) for proof of PRF construction.

More information

Exploring the practical benefits of argumentation in multi-agent deliberation

Exploring the practical benefits of argumentation in multi-agent deliberation Exploring the practical benefits of argumentation in multi-agent deliberation This research was supported by the Netherlands Organisation for Scientific Research (NWO) under project number 612.066.823.

More information

The Entity-Relationship Model

The Entity-Relationship Model The Entity-Relationship Model 221 After completing this chapter, you should be able to explain the three phases of database design, Why are multiple phases useful? evaluate the significance of the Entity-Relationship

More information

Why & How: Business Data Modelling. It should be a requirement of the job that business analysts document process AND data requirements

Why & How: Business Data Modelling. It should be a requirement of the job that business analysts document process AND data requirements Introduction It should be a requirement of the job that business analysts document process AND data requirements Process create, read, update and delete data they manipulate data. Process that aren t manipulating

More information

Math 3000 Section 003 Intro to Abstract Math Homework 2

Math 3000 Section 003 Intro to Abstract Math Homework 2 Math 3000 Section 003 Intro to Abstract Math Homework 2 Department of Mathematical and Statistical Sciences University of Colorado Denver, Spring 2012 Solutions (February 13, 2012) Please note that these

More information

Automata-based Verification - I

Automata-based Verification - I CS3172: Advanced Algorithms Automata-based Verification - I Howard Barringer Room KB2.20: email: howard.barringer@manchester.ac.uk March 2006 Supporting and Background Material Copies of key slides (already

More information

The «include» and «extend» Relationships in Use Case Models

The «include» and «extend» Relationships in Use Case Models The «include» and «extend» Relationships in Use Case Models Introduction UML defines three stereotypes of association between Use Cases, «include», «extend» and generalisation. For the most part, the popular

More information

Sources of International Law: An Introduction. Professor Christopher Greenwood

Sources of International Law: An Introduction. Professor Christopher Greenwood Sources of International Law: An Introduction by Professor Christopher Greenwood 1. Introduction Where does international law come from and how is it made? These are more difficult questions than one might

More information

THE SEARCH FOR NATURAL DEFINABILITY IN THE TURING DEGREES

THE SEARCH FOR NATURAL DEFINABILITY IN THE TURING DEGREES THE SEARCH FOR NATURAL DEFINABILITY IN THE TURING DEGREES ANDREW E.M. LEWIS 1. Introduction This will be a course on the Turing degrees. We shall assume very little background knowledge: familiarity with

More information

Logical Design of Audit Information in Relational Databases

Logical Design of Audit Information in Relational Databases Essay 25 Logical Design of Audit Information in Relational Databases Sushil Jajodia, Shashi K. Gadia, and Gautam Bhargava In the Trusted Computer System Evaluation Criteria [DOD85], the accountability

More information

Communication Diagrams

Communication Diagrams Communication Diagrams Massimo Felici Realizing Use cases in the Design Model 1 Slide 1: Realizing Use cases in the Design Model Use-case driven design is a key theme in a variety of software processes

More information

CRITICAL PATH ANALYSIS AND GANTT CHARTS

CRITICAL PATH ANALYSIS AND GANTT CHARTS CRITICAL PATH ANALYSIS AND GANTT CHARTS 1. An engineering project is modelled by the activity network shown in the figure above. The activities are represented by the arcs. The number in brackets on each

More information

A single minimal complement for the c.e. degrees

A single minimal complement for the c.e. degrees A single minimal complement for the c.e. degrees Andrew Lewis Leeds University, April 2002 Abstract We show that there exists a single minimal (Turing) degree b < 0 s.t. for all c.e. degrees 0 < a < 0,

More information

Measuring the Performance of an Agent

Measuring the Performance of an Agent 25 Measuring the Performance of an Agent The rational agent that we are aiming at should be successful in the task it is performing To assess the success we need to have a performance measure What is rational

More information

Scheduling Shop Scheduling. Tim Nieberg

Scheduling Shop Scheduling. Tim Nieberg Scheduling Shop Scheduling Tim Nieberg Shop models: General Introduction Remark: Consider non preemptive problems with regular objectives Notation Shop Problems: m machines, n jobs 1,..., n operations

More information

Fundamentele Informatica II

Fundamentele Informatica II Fundamentele Informatica II Answer to selected exercises 1 John C Martin: Introduction to Languages and the Theory of Computation M.M. Bonsangue (and J. Kleijn) Fall 2011 Let L be a language. It is clear

More information

KSE Comp. support for the writing process 2 1

KSE Comp. support for the writing process 2 1 KSE Comp. support for the writing process 2 1 Flower & Hayes cognitive model of writing A reaction against stage models of the writing process E.g.: Prewriting - Writing - Rewriting They model the growth

More information

Concurrency Control. Chapter 17. Comp 521 Files and Databases Fall 2010 1

Concurrency Control. Chapter 17. Comp 521 Files and Databases Fall 2010 1 Concurrency Control Chapter 17 Comp 521 Files and Databases Fall 2010 1 Conflict Serializable Schedules Recall conflicts (WR, RW, WW) were the cause of sequential inconsistency Two schedules are conflict

More information

ECON 459 Game Theory. Lecture Notes Auctions. Luca Anderlini Spring 2015

ECON 459 Game Theory. Lecture Notes Auctions. Luca Anderlini Spring 2015 ECON 459 Game Theory Lecture Notes Auctions Luca Anderlini Spring 2015 These notes have been used before. If you can still spot any errors or have any suggestions for improvement, please let me know. 1

More information

How should we think about the testimony of others? Is it reducible to other kinds of evidence?

How should we think about the testimony of others? Is it reducible to other kinds of evidence? Subject: Title: Word count: Epistemology How should we think about the testimony of others? Is it reducible to other kinds of evidence? 2,707 1 How should we think about the testimony of others? Is it

More information

Factors influencing whether, why and what to outsource include the following:

Factors influencing whether, why and what to outsource include the following: INSTITUTIONAL INVESTMENT & FIDUCIARY SERVICES: Investment Basics: Outsourcing Investment Services By Samuel W. Halpern, Area Executive Vice President (Ret.) Whether, Why and What to Outsource From the

More information

Solutions of Linear Equations in One Variable

Solutions of Linear Equations in One Variable 2. Solutions of Linear Equations in One Variable 2. OBJECTIVES. Identify a linear equation 2. Combine like terms to solve an equation We begin this chapter by considering one of the most important tools

More information