Fachberichte INFORMATIK


Knowledge Representation and Automated Reasoning for E-Learning Systems

Peter Baumgartner, Paul A. Cairns, Michael Kohlhase, Erica Melis (Eds.)

16/2003

Fachberichte INFORMATIK
Universität Koblenz-Landau
Institut für Informatik, Universitätsstr. 1, D Koblenz
WWW:


Foreword

Numerous challenges have to be addressed when building e-learning systems. This is witnessed by the enormous breadth of research in the AI in Education field, both with respect to intersections with neighbouring scientific disciplines (cognitive science, human-computer interaction, etc.) and with respect to the tools and techniques developed. E-learning presents particular challenges, as there is the need to represent the learning goals, the learning achievements and the domain knowledge. Also, learners need appropriate support at appropriate times in their learning. All of these require student models, comparison of models and retrieval of course materials. Furthermore, with the shift to web-based platforms, open and sharable knowledge representation schemes with a well-understood semantics and adequate reasoning services will become more and more important.

The workshop focuses on the application of AI themes, such as Knowledge Representation, Planning and Automated Reasoning, as they apply to e-learning environments. The purpose of the workshop is to bring together people who pursue this line of research in order to compare their approaches, stimulate further work, etc. We have therefore invited reports on system architectures, applications and experiences of use. Indeed, as the contributions to these proceedings show, there is considerable interest and activity in this direction.

The organizers

Organizers

Peter Baumgartner, Universität Koblenz-Landau, Germany, peter@uni-koblenz.de
Paul A. Cairns, University College London, UK, p.cairns@ucl.ac.uk
Michael Kohlhase, Carnegie Mellon University, USA, kohlhase+@cs.cmu.edu
Erica Melis, Deutsches Forschungszentrum für Künstliche Intelligenz, Germany, melis@dfki.de

Contents

Esma Aïmeur, Gilles Brassard, Sébastien Gambs: Towards a New Knowledge Elicitation Algorithm 1
Peter Baumgartner, Ulrich Furbach, Margret Gross-Hardt, Alex Sinner: Living Book - Deduction, Slicing, Interaction 8
Christoph Benzmüller, Armin Fiedler, Malte Gabsdil, Helmut Horacek, Ivana Kruijff-Korbayová, Manfred Pinkal, Jörg Siekmann, Dimitra Tsovaltzi, Bao Quoc Vo, Magdalena Wolska: Tutorial Dialogs on Mathematical Proofs 12
Armin Fiedler, Dimitra Tsovaltzi: Automating Hinting in an Intelligent Tutorial Dialog System for Mathematics 23
Alfredo Garro, Nicola Leone, Francesco Ricca: Logic Based Agents for E-learning 36
Marilza Antunes de Lemos, Leliane Nunes de Barros, Roseli de Deus Lopes: Modeling Plans and Goals in a Programming Intelligent Tutoring System 46
Permanand Mohan, Jim Greer, Gordon McCalla: Instructional Planning with Learning Objects 52
Andrew Potter: Invoking the Cyber-Muse: Automatic Essay Assessment in the Online Learning Environment 59
J.P. Spagnol: Modelling and Automation of Reasoning in Geometry. The ARGOS System: a Learning Companion for High-School Pupils 63
Tiffany Y. Tang, Gordon McCalla: Towards Pedagogy-Oriented Paper Recommendations and Adaptive Annotations for a Web-Based Learning System 72

Towards a New Knowledge Elicitation Algorithm

Esma Aïmeur, Gilles Brassard and Sébastien Gambs
Université de Montréal, Département IRO
C.P. 6128, Succursale Centre-Ville, Montréal (Québec), H3C 3J7 Canada

Abstract

The task of transferring knowledge from a human to a computer has always been challenging, and even more so when the knowledge comes from multiple sources. It has been the role of knowledge acquisition tools to help in solving this problem. In this paper, we introduce a novel elicitation algorithm for knowledge base construction. An experiment is currently under way to evaluate how natural this algorithm feels to humans and how well it reflects their way of reasoning.

1 Introduction

This paper addresses the problem of finding algorithms that can help structure knowledge acquisition. Our main motivation originates with our need to elicit the curriculum of QUANTI [Aïmeur et al., 2001a; 2001b; 2002], an Intelligent Tutoring System currently under construction to teach Quantum Information Processing (QIP) [Chuang and Nielsen, 2000]. The challenge comes from the fact that QIP is multidisciplinary (physics, computer science, mathematics and chemistry) and that we need to elicit knowledge from multiple experts in each of these fields. Consider an expert in a particular field. The goal is to find a way to assist him in the task of expressing his knowledge explicitly. The process should be as natural and intuitive as possible for the expert. The knowledge representation considered in this paper is a semantic network, a graph in which the nodes represent pieces of knowledge and the edges between nodes symbolize relationships between them. Examples of relationships are association, composition, equivalence, exclusion, etc. Semantic networks are good candidates for applications in knowledge engineering and knowledge acquisition.
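As an illustration, a semantic network of this kind can be captured by a very small data structure: typed nodes plus typed, directed edges. The following Python sketch is ours, not part of the QUANTI system; the node labels and relation names are invented for the example.

```python
class SemanticNetwork:
    """Minimal semantic network: typed nodes and typed, directed edges.
    (Illustrative sketch only; all names below are hypothetical.)"""
    def __init__(self):
        self.node_types = {}   # node label -> node type
        self.edges = []        # (source, relation, target) triples

    def add_node(self, label, node_type="concept"):
        self.node_types[label] = node_type

    def add_edge(self, source, relation, target):
        # Both endpoints must already exist as nodes.
        for label in (source, target):
            if label not in self.node_types:
                raise KeyError(f"unknown node: {label}")
        self.edges.append((source, relation, target))

    def relations_from(self, label):
        # All (relation, target) pairs leaving a given node.
        return [(rel, tgt) for src, rel, tgt in self.edges if src == label]

net = SemanticNetwork()
net.add_node("Quantum Teleportation")
net.add_node("EPR pair")
net.add_edge("Quantum Teleportation", "composition", "EPR pair")
print(net.relations_from("Quantum Teleportation"))  # [('composition', 'EPR pair')]
```

An elicitation algorithm, in these terms, decides the order in which `add_node` and `add_edge` are invoked while the expert supplies the labels and types.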
We give in the Appendix an example of the first-level semantic network that corresponds to the concept of Quantum Teleportation [Bennett et al., 1993] in the context of Quantum Information Processing. Semantic networks are the ancestors of more elaborate representations such as the conceptual graphs developed by Sowa [Sowa, 1984; 2000]. By as natural and intuitive as possible, we mean an algorithm for graph creation that matches most closely the way the expert organizes knowledge. We should keep in mind that an expert in a domain is rarely also a knowledge engineer; thus having the knowledge does not necessarily imply the ability to transfer it. An elicitation algorithm close to the mental behaviour of the expert should make the task of transferring knowledge easier and more efficient, both in terms of time and accuracy. An overview of the different knowledge acquisition methods is given in Section 2. Computer scientists are often acquainted with algorithms for graph exploration, such as depth-first and breadth-first search, but not with their counterparts for the elicitation of a graph. These variants of depth-first and breadth-first search are described in Section 3. The main contribution of this paper is to propose in Section 4 a new graph elicitation algorithm called HAKE, which is a hybrid between the depth-first and breadth-first approaches. Note once more that the task here is not to explore a graph that already exists but rather to assist the expert in creating one. For instance, an elicitation algorithm could suggest to the expert the next node from which new links could be defined. Whichever elicitation algorithm is used, however, it is important to remember that its purpose is to guide an expert, not to constrain him. Therefore, it must always be possible for experts to create, modify or delete nodes and edges at any time and in any order they wish. Given several elicitation algorithms, deciding which one suits the expert best is highly subjective.
For this reason, an experiment is currently under way in order to compare the benefits of HAKE with other elicitation algorithms based on the depth-first and breadth-first approaches. This experiment is briefly described in Section 5. Finally, we conclude with Section 6.

2 Knowledge Acquisition Methods

Knowledge acquisition is a subfield of artificial intelligence concerned with the development of methods, software and tools for building knowledge bases. In computer science, it has been used for a long time to construct the knowledge base of expert systems. There are three main categories of knowledge acquisition methods, namely direct elicitation techniques, machine-aided techniques and machine learning techniques. The purpose of direct elicitation techniques is to discover what knowledge the expert uses and the methods he employs for problem solving within a particular domain [Boose, 1989; Cooke, 1994]. Whereas direct elicitation techniques require interaction between the expert and

the knowledge engineer, with machine-aided techniques the expert explains his knowledge directly to the computer in a way that is semi-automatic and interactive. In particular, knowledge acquisition tools use a conceptual model to interact with the user, thus hiding the complexity and the unfamiliarity of the symbolic model upon which the knowledge base is constructed [Clark et al., 2001; Gaines and Shaw, 1993; Kim and Gil, 2002; Schreiber et al., 1999; Schreiber, 2001]. The elicitation techniques considered in this paper belong to this category, and their main purpose is to assist in the creation of a knowledge base to serve as curriculum in Intelligent Tutoring Systems. Finally, the goal of machine learning knowledge acquisition techniques is to automate a significant part of the knowledge acquisition process [De Jong et al., 1993; Gaines, 1996; Krishnan et al., 1999; Tecuci and Kodratoff, 1995; Webb et al., 1999; Wetjers and Paredis, 2002; Zhou and Chen, 2002]. Aside from these three categories, software such as meta-tools has been developed to generate, on demand, knowledge acquisition tools adapted to a particular domain [Gennari et al., 2003].

3 Depth-First and Breadth-First Elicitation Algorithms

The depth-first and breadth-first approaches to knowledge elicitation are directly inspired by their well-known cousins used in graph searching. However, the task of interest here is not to explore an existing graph but rather to assist the expert in creating a graph. The algorithm used affects the order in which the nodes and edges are created. In general, knowledge elicitation techniques could be used to create new nodes (objects) in addition to defining edges (relations) between nodes. For the sake of simplicity, however, we assume in the formal descriptions below, as well as in the experimentation under way, that the set of objects (the dictionary) has already been defined.
We use the elicitation process to assist the user in creating edges between nodes, as well as in defining the types of nodes and edges.

3.1 Depth-first approach

With the depth-first approach, the expert starts creating the graph from the root node and creates an edge between the root and a child node; then, the focus moves directly to this child node without asking the expert first to enter all the children of the root node. The elicitation process continues in the same manner: each time an edge is created between a node and its child, the elicitation continues directly with the child node. When the elicitation reaches a dead end (i.e. a leaf), the algorithm uses backtracking (implicit in the recursion) in order to go back to the parent node and continue the elicitation. Numbers on the edges of Fig. 1 illustrate the order in which an expert might use a depth-first approach to elicit a concept from ecology in which we are interested, the food chain: there is an edge from one node to another if the former is eaten by the latter. A human creating a graph in this manner can be thought of as someone who keeps following the trail of his ideas. He starts by thinking of an object A, and then he imagines an association between this object A and some other object B. Now, he has to figure out an association between this object B and yet another object. This process is repeated until he is no longer able to link the object he is currently thinking of with another object. When this point is reached, the expert goes back to the previous object, which corresponds to backtracking, and he continues the process. The advantage of this technique is that the expert can follow an idea until the end without any delay: he does not have to wait until he has thought of all the children of a node before continuing the elicitation with the next node.
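The follow-the-trail behaviour just described can be sketched in a few lines of Python. This sketch is ours, not the paper's implementation: node types are omitted, and a scripted stand-in replays fixed answers in place of the human expert so that the loop terminates.

```python
def depth_elicitation(node, visited, expert, graph):
    # After each new edge the focus moves directly to the child node;
    # backtracking is implicit in the recursion.
    while True:
        succ = expert.select_successor(node)   # None = no further successor
        if succ is None:
            return
        edge_type = expert.edge_type(node, succ)
        graph.setdefault(node, []).append((edge_type, succ))
        if succ not in visited:
            visited.add(succ)
            depth_elicitation(succ, visited, expert, graph)

class ScriptedExpert:
    """Hypothetical stand-in for a human expert, replaying fixed answers."""
    def __init__(self, answers):
        self.answers = answers                 # node -> successors to propose
    def select_successor(self, node):
        pending = self.answers.get(node, [])
        return pending.pop(0) if pending else None
    def edge_type(self, node, succ):
        return "eaten-by"                      # food-chain relation

expert = ScriptedExpert({"grass": ["rabbit"], "rabbit": ["fox"]})
graph = {}
depth_elicitation("grass", {"grass"}, expert, graph)
print(graph)  # {'grass': [('eaten-by', 'rabbit')], 'rabbit': [('eaten-by', 'fox')]}
```

In an interactive tool, `select_successor` and `edge_type` would be dialogue prompts rather than a replayed script.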
Also, there is no risk for the expert of getting lost, because there is never a conceptual jump (see Section 3.2) between two consecutive nodes in the elicitation process: the distance in terms of the number of edges between two consecutive nodes is always 1. The main drawback of the depth-first approach is that, by the time the expert returns to the root after he has elicited the entire subgraph of the first child, he may have lost track of the list of children he might have had in mind for the root at the start of the process. More formally, the depth-first approach to graph elicitation is given below in pseudo-code. Here, we say that a node has been visited if the expert has considered it at least once by creating a link to it. (The root is also considered visited at the outset even if no other node points to it.)

Process depth initialization(root)
    for each word in Dictionary do
        mark word as not visited
    ask expert to choose the type of node root
    mark root as visited
    call depth elicitation(root)

Process depth elicitation(node)
    repeat
        ask expert to select from Dictionary an entry succ for node to point at
        ask expert to choose the type of edge from node to succ
        if succ has not been visited then
            ask expert to choose the type of node succ
            mark succ as visited
            call depth elicitation(succ)
    until expert does not wish to select another successor for node

3.2 Breadth-first approach

According to the breadth-first approach, which is illustrated in Fig. 2 with the same example, the expert starts from the root node and specifies all of its children by creating the corresponding edges. The root is considered to be at level 0 and its children at level 1. Next, the expert considers the nodes at level 1 one by one, and he creates the edges between them and their children; these new nodes are at level 2. The process continues in the same manner, level by level, until the entire graph has been elicited.
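For comparison, here is a minimal Python sketch of this level-by-level process. As before, it is our illustration rather than the paper's implementation: node and edge types are omitted, and a scripted function stands in for the expert (the food-chain fragment is invented).

```python
from collections import deque

def breadth_elicitation(root, select_successor):
    # Level-by-level: the expert names all successors of one node before the
    # focus moves to the next node waiting in the FIFO queue.
    graph, visited, queue = {}, {root}, deque([root])
    while queue:
        node = queue.popleft()
        graph[node] = []
        while True:
            succ = select_successor(node)      # None = no further successor
            if succ is None:
                break
            graph[node].append(succ)
            if succ not in visited:
                visited.add(succ)
                queue.append(succ)
    return graph

# Scripted stand-in for the expert (an invented food-chain fragment).
answers = {"grass": ["rabbit", "mouse"], "rabbit": ["fox"], "mouse": ["fox"]}
pick = lambda node: answers[node].pop(0) if answers.get(node) else None
graph = breadth_elicitation("grass", pick)
print(graph)  # {'grass': ['rabbit', 'mouse'], 'rabbit': ['fox'], 'mouse': ['fox'], 'fox': []}
```

The FIFO queue is exactly what produces the level-by-level order; swapping it for a stack would recover a depth-first order.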
The main advantage of the breadth-first approach is that it allows the expert to focus on a particular concept and to give all the relations between this concept and other concepts before going on. Therefore, the expert will not have to come

Figure 1: Illustration of the depth-first algorithm

back to this concept later, so that he will need to concentrate on it only once. During the elicitation process, a conceptual jump occurs when the distance between two successively elicited nodes is large. The larger the distance between these nodes, the greater the risk that the expert might be confused or delayed. He may not understand the relationship between the two nodes, nor why they follow each other in the elicitation process. One of the drawbacks of the breadth-first approach is that if the depth of the graph is high, there will sometimes be conceptual jumps between two consecutively elicited concepts. The deeper the graph, the larger and the more numerous the conceptual jumps. It may become difficult for the expert to keep track of the big picture.

Process breadth elicitation(root)
    for each word in Dictionary do
        mark word as not visited
    initialize queue to an empty queue
    ask expert to choose the type of node root
    mark root as visited
    add root to queue
    repeat
        let node be the first element of queue
        remove node from queue
        repeat
            ask expert to select from Dictionary an entry succ for node to point at
            if succ has not been visited then
                ask expert to choose the type of node succ
                mark succ as visited
                add succ to queue
            ask expert to choose the type of edge from node to succ
        until expert does not wish to select another successor for node
    until queue is empty

Figure 2: Illustration of the breadth-first algorithm

4 The HAKE Elicitation Algorithm

Our novel elicitation algorithm is a hybrid between the depth-first and breadth-first approaches, hence its name HAKE: Hybrid Algorithm for Knowledge Elicitation. Our purpose in designing HAKE was to capitalize on the strengths and avoid the weaknesses of the earlier approaches. As already mentioned, its purpose is not to explore a graph but to guide the expert in the creation of the graph.
It begins as with breadth-first elicitation: the expert starts from the root node and specifies all of its children by creating the corresponding edges. But then, the focus is set on the first child, as with depth-first elicitation, and the expert is asked to proceed recursively and elicit the entire subgraph accessible from that child, beginning with the elicitation of all of its children. When the process reaches a dead end because the first subgraph is complete, we backtrack to reset the focus on the second child of the root node, and we elicit the entire second subgraph before backtracking again to the next child of the root. This process is repeated until the focus has been set successively on each child of the root and the entire graph has been elicited. In addition to the earlier notion of a node having been visited, we need in the pseudo-code below to say that a node has been elicited when all the relations going from this node to other nodes have been defined. Again, the food-chain example is used to illustrate HAKE in Fig. 3.

Process HAKE initialization(root)
    for each word in Dictionary do
        mark word as not visited and not elicited
    ask expert to choose the type of node root
    mark root as visited
    call HAKE elicitation(root)

Process HAKE elicitation(node)
    initialize queue to an empty queue
    // breadth-first selection of all the immediate successors of node
    repeat
        ask expert to select from Dictionary an entry succ for node to point at

        if succ has not been visited then
            ask expert to choose the type of node succ
            mark succ as visited
        if succ has not been elicited then
            add succ to queue
        ask expert to choose the type of edge from node to succ
    until expert does not wish to select another successor for node
    mark node as elicited
    // depth-first elicitation of those successors
    while queue is not empty do
        let succ be the first element of queue
        remove succ from queue
        if succ has not been elicited then
            recursively call HAKE elicitation(succ)

Figure 3: Illustration of HAKE

5 Evaluation of HAKE

In the preceding sections, we have explained why we believe that classical depth-first and breadth-first approaches to knowledge elicitation are not appropriate. To remedy the perceived shortcomings, we have presented HAKE, our novel knowledge elicitation algorithm. However, it is not possible to give a mathematical proof that HAKE is better than its competitors. Whereas graph exploration algorithms can easily be compared in terms of time, space, and the quality of the resulting solution, elicitation algorithms cannot be compared in such a straightforward manner. The comparison must be done empirically, by testing the algorithms directly with humans. Hence, we designed a preliminary experiment in the fall of 2002, and we are currently running a more elaborate evaluation procedure, whose purpose is to determine whether or not our intuition that HAKE would outperform more classical approaches is justified. In this section, we briefly report on both experiments. As explained in the Introduction, the point of elicitation algorithms is to guide experts in the creation of a conceptual graph. They are definitely not meant to constrain them. At any time during an elicitation process, regardless of which guiding approach is taken, it should be easy for experts to reject suggestions from the system and to continue elicitation in any order they please.
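Under the same simplifications as in the earlier sketches (no node or edge types, a scripted stand-in for the expert, invented food-chain data), HAKE's breadth-then-depth pattern might be rendered in Python as follows. This is our hedged reading of the pseudo-code, not the authors' implementation.

```python
from collections import deque

def hake_elicitation(node, pick, visited, elicited, graph):
    # Breadth-first phase: collect all immediate successors of `node` first...
    queue = deque()
    graph[node] = []
    while True:
        succ = pick(node)                      # None = no further successor
        if succ is None:
            break
        graph[node].append(succ)
        visited.add(succ)
        if succ not in elicited:
            queue.append(succ)
    elicited.add(node)
    # ...then depth-first phase: recurse into each child's entire subgraph.
    while queue:
        succ = queue.popleft()
        if succ not in elicited:
            hake_elicitation(succ, pick, visited, elicited, graph)

answers = {"grass": ["rabbit", "mouse"], "rabbit": ["fox"], "mouse": ["fox"]}
pick = lambda node: answers[node].pop(0) if answers.get(node) else None
visited, elicited, graph = {"grass"}, set(), {}
hake_elicitation("grass", pick, visited, elicited, graph)
print(graph)  # {'grass': ['rabbit', 'mouse'], 'rabbit': ['fox'], 'fox': [], 'mouse': ['fox']}
```

Note the elicitation order the run produces: both children of the root are named first, then the first child's subgraph is completed in full before the focus returns to the second child, exactly the hybrid behaviour described above.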
In other words, it must always be possible for an expert to decide on new nodes and links to be created or deleted, regardless of any computer logic. For instance, in the case of a breadth-first elicitation, we should allow an expert to come back to a node previously visited if a new idea comes up, even though this would not happen in a pure breadth-first approach. Nevertheless, this flexibility was not allowed during our experiments because our purpose is to compare the raw algorithms.

5.1 A preliminary experiment

The first experiment was carried out with 45 volunteers using graphical software written in Java that had been specially designed for this purpose. The evaluation consisted of three different parts: free elicitation of a graph given on paper, guided elicitation of the same graph using all three algorithms in turn, and free elicitation of another graph that was not given explicitly. A post-test concluded the experiment. The graph elicited four times in the first two parts was our now-familiar food-chain graph shown in Figs. 1, 2 and 3 (without numbers on the edges, of course); it contains 11 nodes and 20 edges. This example was chosen because it is a topic about which everyone has some knowledge. The graph elicited in the third part contained 20 nodes and 21 edges. All the actions of the user were monitored and recorded in a log file. An assistant was always available during the experiment. The purpose of free elicitation was to gather data on the elicitation order that came naturally to humans. This order was then compared to the order that would have been produced according to each of the three elicitation approaches. The purpose of guided elicitation was to compare the ease with which humans could use the algorithms, both in terms of objective criteria such as speed and number of errors, and subjective criteria such as their perception of how natural the process seems to be.
The results that we obtained in this first experiment were encouraging, because HAKE outperformed the breadth-first and depth-first approaches on all our objective criteria. In all cases, the depth-first approach was left far behind and breadth-first was a close second. On the other hand, when we consider the subjective criteria provided by the users, we found that breadth-first came out on top: people generally considered this algorithm to be the most natural, the easiest to use and the least demanding in terms of concentration. This time, it is HAKE that came out as a close second (except for naturalness), and again depth-first was left far behind. To summarize, HAKE and breadth-first are very close according to every criterion in our study; neither appears to outclass the other in a way that would be indisputable. Unfortunately, this preliminary study suffered from several shortcomings. This prompted us to proceed to another, more thorough experiment, which is described in the next subsection. In particular, we needed to improve on the following aspects of our methodology. True elicitation processes cannot take place when users are explicitly provided with the graph to be elicited, as was the case in the first two parts of our preliminary experiment. How relevant was it to test humans on what

really amounted to graph exploration using the three algorithms? It is true that users had to elicit a graph that was not given to them explicitly in the last part of our evaluation, but regrettably that graph was too close to being a tree, which again misses important aspects of real-life elicitation. Moreover, the metric used to compare the order produced by free elicitation to the order that would have been produced with guided elicitation was dubious. A more serious flaw in our preliminary experiment was that we presented the users with the three elicitation algorithms in a fixed order, which certainly biased the study. Finally, a population of 45 volunteers is not statistically sufficient, and the fact that our graphs contained about 20 edges was too small to make a convincing case.

5.2 The current experiment

Encouraged by the results of the first evaluation, yet fully aware of its shortcomings, we are currently running an improved experiment. All the inadequacies listed in the last paragraph of the previous subsection have been fixed. In particular, we have already gathered data from more than one hundred volunteers who have spent about one hour each in the process: they were asked to elicit familiar concepts for which they were not supplied with an explicit graph, they were given the opportunity to elicit graphs (not trees) containing as many as 80 edges, they were asked in a random order to try out all three elicitation algorithms, and the metric that we shall use to compare the results of free and guided elicitation will be more appropriate. Obviously, it is too early to report on the results of this experiment, since it is currently under way.

6 Conclusions

We have designed a novel knowledge elicitation algorithm called HAKE and argued that it is a good compromise between the more classical depth-first and breadth-first approaches because it combines advantages and avoids inconveniences of each.
As with breadth-first, it encourages the expert to define at once all the relations between a node and its children; hence the expert needs to concentrate on each node only once. As with depth-first, on the other hand, the expert is given the opportunity to follow the trail of his ideas without being unduly side-tracked, in contrast with the more classical approaches. A preliminary evaluation of our algorithm was sufficiently encouraging to prompt us to undertake a more thorough experiment, which is currently under way. Nevertheless, we realize that other elicitation algorithms are possible, and that it is likely that the best approach is not among the three that we have considered in this paper. Certainly, we do not claim here that HAKE is the ultimate medicine! In fact, it is likely that there is no universally best choice that would benefit every expert when it comes to building a real knowledge base. If we cannot determine that one approach is best, an alternative would be to administer a pretest to each new expert in order to custom-fit the elicitation algorithm to his mental process. For this purpose, we would need to observe the expert during free elicitation on several examples. Subsequently, unsupervised learning techniques would be used on the collected data in order to construct an ad hoc algorithm that would best fit the observed behaviour. This adaptive approach, which would certainly be far from trivial, falls outside the scope of this paper; it is left for further studies. A large-scale experiment in which several approaches would be used by experts to elicit a complete semantic network (for instance, the knowledge base that will be at the heart of the aforementioned Intelligent Tutoring System QUANTI [Aïmeur et al., 2001a; 2001b; 2002]) would provide a more convincing test case.

References

[Aïmeur et al., 2001a] Aïmeur, E., Blanchard, E., Brassard, G., Fusade, B.
and Gambs, S., Designing a Multidisciplinary Curriculum for Quantum Information Processing, Proceedings of AIED 01: Artificial Intelligence in Education, pp , 2001a.
[Aïmeur et al., 2001b] Aïmeur, E., Blanchard, E., Brassard, G. and Gambs, S., QUANTI: A Multidisciplinary Knowledge-Based System for Quantum Information Processing, Proceedings of CALIE 01: International Conference on Computer Aided Learning in Engineering Education, pp , 2001b.
[Aïmeur et al., 2002] Aïmeur, E., Brassard, G., Dufort, H. and Gambs, S., CLARISSE: A Machine Learning Tool to Initialize Student Models, Proceedings of ITS 02: Intelligent Tutoring Systems, pp , 2002.
[Bennett et al., 1993] Bennett, C. H., Brassard, G., Crépeau, C., Jozsa, R., Peres, A. and Wootters, W. K., Teleporting an Unknown Quantum State Via Dual Classical and Einstein-Podolsky-Rosen Channels, Physical Review Letters 70(13), pp , 1993.
[Boose, 1989] Boose, J. H., A Survey of Knowledge Acquisition Techniques and Tools, Knowledge Acquisition 1(1), pp. 3-38, 1989.
[Chuang and Nielsen, 2000] Chuang, I. L. and Nielsen, M., Quantum Computation and Quantum Information, Cambridge University Press, 2000.
[Clark et al., 2001] Clark, P., Thompson, J., Barker, K., Porter, B., Chaudhri, V., Rodriguez, A., Thomere, J., Mishra, S., Gil, Y., Hayes, P. and Reichherzer, T., Knowledge Entry as the Graphical Assembly of Components, Proceedings of K-CAP: International Conference on Knowledge Capture, 2001.
[Cooke, 1994] Cooke, N. J., Varieties of Knowledge Elicitation Techniques, International Journal of Human-Computer Studies 41, pp , 1994.
[De Jong et al., 1993] De Jong, K. A., Spears, W. M. and Gordon, D. F., Using Genetic Algorithms for Concept Learning, Machine Learning 13, pp , 1993.
[Gaines, 1996] Gaines, B. R., Transforming Rules and Trees into Comprehensible Knowledge Structures, in Advances in Knowledge Discovery and Data Mining, U. M. Fayad, G. Piatetsky-Shapiro, P. Smyth and R. Uthurusamy (editors), MIT Press, pp , 1996.

[Gaines and Shaw, 1993] Gaines, B. R. and Shaw, M. L. G., Supporting the Creativity Cycle Through Visual Languages, Proceedings of AAAI Spring Symposium: AI and Creativity, AAAI, pp , 1993.
[Gennari et al., 2003] Gennari, G., Musen, M., Fergerson, R., Grosso, W., Crubézy, M., Eriksson, H., Noy, N. and Tu, S., The Evolution of Protégé: An Environment for Knowledge-Based Systems Development, International Journal of Human-Computer Studies 58, pp , 2003.
[Kim and Gil, 2002] Kim, J. and Gil, Y., Deriving Acquisition Principles from Tutoring Principles, Proceedings of ITS 02: Intelligent Tutoring Systems, pp , 2002.
[Krishnan et al., 1999] Krishnan, R., Sivakumar, G. and Bhattacharya, P., Extracting Decision Trees From Trained Neural Networks, Pattern Recognition 32(12), pp , 1999.
[Schreiber, 2001] Schreiber, G., CommonKADS, Engineering and Managing Knowledge, The CommonKADS Website, 2001.
[Schreiber et al., 1999] Schreiber, G., Akkermans, H., Anjewierden, A., de Hoog, R., Shadbolt, N., Van de Velde, W. and Wielinga, B., Knowledge Engineering and Management: The CommonKADS Methodology, MIT Press, 1999.
[Sowa, 1984] Sowa, J., Conceptual Structures: Information Processing in Mind and Machine, Addison-Wesley Publishing Company, 1984.
[Sowa, 2000] Sowa, J., Knowledge Representation: Logical, Philosophical and Computational Foundations, Brooks Cole Publishing Co., 2000.
[Tecuci and Kodratoff, 1995] Tecuci, G. and Kodratoff, Y., Machine Learning and Knowledge Acquisition: Integrated Approaches, Academic Press, 1995.
[Webb et al., 1999] Webb, G., Wells, J. and Zheng, Z., An Experimental Evaluation of Integrating Machine Learning with Knowledge Acquisition, Machine Learning 35(1), pp. 5-23, 1999.
[Wetjers and Paredis, 2002] Weitjers, T. and Paredis, J., Genetic Rule Induction at an Intermediate Level, Knowledge-Based Systems 15(1), pp , 2002.
[Zhou and Chen, 2002] Zhou, Z.-H. and Chen, Z.-Q., Hybrid Decision Tree, Knowledge-Based Systems 15(8), pp , 2002.

Figure 4: Semantic network for the concept of Quantum Teleportation

Living Book - Deduction, Slicing, Interaction

Peter Baumgartner, Ulrich Furbach, Margret Gross-Hardt, Alex Sinner
University of Koblenz, Department of Computer Science
Universitätsstr. 1, Koblenz
{peter, uli, margret,

Abstract

The Living Book is a system for the management of personalized and scenario-specific teaching material. The main goal of the system is to support active, explorative and self-determined learning in lectures, tutorials and self-study. The Living Book includes a course on logic for computer scientists with uniform access to various tools such as theorem provers and an interactive tableau editor. It is routinely used in undergraduate teaching at our university. This paper focuses on the use of theorem-proving technology within the Living Book, viz., the knowledge management system (KMS). The KMS provides a scenario management component where teachers may describe those parts of given documents that are relevant in order to achieve a certain learning goal. The task of the KMS is to assemble new documents from a database of elementary units called slices (definitions, theorems, and so on) in a scenario-based way (like "I want to prepare for an exam and need to learn about resolution").

1 Overview

This system description is about a real-world application of automated deduction. The system that we describe, the Living Book, is a tool for the management of personalized teaching material. Its main goal is to support active, explorative and self-determined learning in lectures, tutorials and self-study. It includes a course on logic for computer scientists with uniform access to various tools such as theorem provers and an interactive tableau editor 1. This course is routinely used in undergraduate teaching at our university. The system integrates a knowledge management system (KMS) that uses theorem-proving technology as a core component.
The task of the KMS is to assemble documents from a database of elementary units called slices (definitions, theorems, and so on) in a task-oriented way (such as "I want to prepare for an exam and need to learn about resolution"). We argue that such tasks can be naturally expressed in logic and that automated deduction technology can be exploited for solving them. In fact, we use first-order logic with a default negation principle, and we employ a model-computation theorem prover for the reasoning tasks in the KMS. The input of the theorem prover consists of meta data that describe the dependencies between different slices, together with logic-programming style rules that describe the task-specific composition of slices. Additionally, a user model is taken into account that contains information about topics and slices that are known or unknown to a student. A model computed by the system for such input then directly specifies the document to be assembled. (This work is sponsored by the EU IST grant TRIAL-SOLUTION and the BMBF grant In2Math.)

Extension of the reader software. We briefly describe the non-deductive technological framework in which the Living Book is embedded, and we indicate the new features made possible by using deduction. This framework is the Slicing Information Technology (SIT) [5] for the management of personalized documents. Its kernel was developed within the ILF system in the German focus program "Deduction", which was carried out in the nineties. The Slicing Book technology handles documents or textbooks that are split into small semantic units, so-called slices or units, which may be, e.g., a paragraph, a definition, or a problem in the original documents. Additional meta data play an important role in describing, e.g., dependencies among slices, possibly between slices from different documents. Also, keywords can be assigned to slices to indicate what the contents of a slice are about.
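The basic assembly task can be pictured with a small sketch. All slice names, metadata, and the `assemble` helper below are invented for illustration; the actual KMS computes models of logic programs rather than filtering a Python dictionary.

```python
# Hypothetical illustration of the KMS assembly task: slices carry
# metadata (type, keywords), and a scenario query selects the
# relevant ones. Not the actual Living Book code.

slices = {
    "logic/2/1": {"type": "definition", "keywords": {"resolution"}},
    "logic/2/2": {"type": "theorem", "keywords": {"resolution"}},
    "logic/3/1": {"type": "exercise", "keywords": {"tableaux"}},
}

def assemble(keyword, wanted_types):
    """Return slice names matching a scenario like
    'prepare for an exam on <keyword>'."""
    return sorted(name for name, meta in slices.items()
                  if keyword in meta["keywords"]
                  and meta["type"] in wanted_types)

print(assemble("resolution", {"definition", "theorem"}))
# -> ['logic/2/1', 'logic/2/2']
```

The real system additionally takes dependencies between slices and the user model into account, as described below.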
The process of slicing and keyword annotation is partially automated, but usually needs some further manual work. (An experienced person needs about two weeks per book.) The Living Book is embedded in a software system called the SIT-Reader, which allows HTML-based access to slices stored on the server with a standard web browser (cf. the screenshot in Figure 1). To use the system, a user can mark units, such as analysis/3/1/15 and analysis/3/1/16, representing, e.g., a theorem in the analysis book together with its proof. She can then tell the system that she wants to read the marked units and receives a corresponding PDF document. If she thinks that this information is not sufficient for her understanding, she can tell the system to include all units that are prerequisites of the selected units.
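The prerequisite expansion just described amounts to a transitive closure over a "requires" relation between units. A minimal sketch, with hypothetical metadata (this is not the SIT-Reader implementation):

```python
# Illustrative prerequisite closure: starting from the marked units,
# repeatedly add every unit listed as a prerequisite until a fixed
# point is reached. The "requires" data below is invented.

requires = {
    "analysis/3/1/16": ["analysis/3/1/15"],  # a proof requires its theorem
    "analysis/3/1/15": ["analysis/3/1/2"],   # the theorem requires a definition
}

def closure(marked):
    """Return the marked units plus all their (transitive) prerequisites."""
    todo, result = list(marked), set(marked)
    while todo:
        unit = todo.pop()
        for prereq in requires.get(unit, []):
            if prereq not in result:
                result.add(prereq)
                todo.append(prereq)
    return result

print(sorted(closure({"analysis/3/1/16"})))
# -> ['analysis/3/1/15', 'analysis/3/1/16', 'analysis/3/1/2']
```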

2 The Logic Behind

On a higher, research-methodological level, the deduction technique used in the KMS is intended as an attempt at bridging a gap. On one side, our approach builds on results from the area of logic-based knowledge representation and logic programming concerning the semantics of (disjunctive) logic programs (see [2] for an overview). On the other side, our KRHYPER system used in the Living Book is built on calculi and techniques developed for classical first-order reasoning, such as hyper tableaux [1] and term indexing. To formalize our application domain, we found features of both mentioned areas mandatory: the logic should be a first-order logic, it should support a default negation principle, and it should be disjunctive. We motivate these features below.

However, the Living Book goes beyond what is possible with the SIT-Reader alone. For instance, a user may select a certain chapter, say chapter 3, containing everything about integrals in the analysis book. But instead of requesting all units from this chapter, the user wants the system to take into account that she already knows, e.g., unit 3.1, and she possibly wants just the material that is important for preparing for an exam. Based on the units and their meta data, the deduction system can exploit this knowledge and combine the LaTeX-based units into a new document (hopefully) fitting the needs of the user. In conclusion, we not only have the text of the books, we have an entire knowledge base about the material, which can be used by the reader in order to generate personalized documents from the given books.

Knowledge Management System. From the viewpoint of deduction, the most interesting component of the Living Book is the Knowledge Management System (KMS) with its deduction system KRHYPER. Figure 1 depicts the overall system architecture containing the KMS.
As mentioned there, the KMS handles meta data of various types: types of units ("Definition", "Theorem", etc.), keywords describing what the units are about ("Integral", etc.), references between units (e.g., a "Theorem" unit about "Integral" refers to a "Definition" unit), and units that are required by other units in order to make sense (e.g., a unit could say "solve exercise so-and-so under the assumption ...", and this unit would (textually) require the mentioned exercise unit). Further, a user profile stores what is known and what is unknown to the user. It may heavily influence the computation of the assembly of the final document. The user profile is built from explicit declarations given by the user about units and/or topics that are known/unknown to him. This information is completed by deduction to figure out what other units must also be known/unknown according to the initial profile. The overall system was evaluated in field studies at two German universities within undergraduate math education, and the KMS-based assembly of documents was received very positively by students.

First-Order Specifications. In the field of knowledge representation, and in particular when non-monotonic reasoning is of interest, it is common practice to identify a clause with the set of its ground instances. Reasoning mechanisms often suppose that these sets are finite, so that essentially propositional logic results. Such a restriction should not be made in our case. Consider the following clauses, which are actual program code (in Prolog notation) in the KMS about user modeling:

    unknown_unit(analysis/1/2/1).                    (1)
    known_unit(analysis/1/2/_ALL_).                  (2)
    refers(analysis/1/2/3, analysis/1/0/4).          (3)
    known_unit(BookB/UnitB) :-                       (4)
        known_unit(BookA/UnitA),
        refers(BookA/UnitA, BookB/UnitB).

Fact (1) states that the unit named analysis/1/2/1 is "unknown"; in fact (2), the _ALL_ symbol stands for an anonymous, universally quantified variable.
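The effect of such a universally quantified fact can be sketched as follows; the `matches` helper is a hypothetical illustration, not part of KRHYPER. A pattern containing _ALL_ covers all of its ground instances, so membership can be tested without enumerating them.

```python
# Illustrative sketch: a fact pattern with the anonymous variable
# _ALL_ covers every ground unit it unifies with, so membership can
# be decided positionally without grounding. Hypothetical helper.

def matches(pattern, unit):
    """Does a fact pattern like 'analysis/1/2/_ALL_' cover a unit?"""
    p, u = pattern.split("/"), unit.split("/")
    if len(p) != len(u):
        return False
    return all(a == "_ALL_" or a == b for a, b in zip(p, u))

assert matches("analysis/1/2/_ALL_", "analysis/1/2/7")
assert not matches("analysis/1/2/_ALL_", "analysis/1/3/7")
```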
Due to the "/" function symbol (and possibly others) the Herbrand base is infinite. Certainly it would be sufficient to take the set of ground instances of these facts up to a certain depth imposed by the books. However, as this yields exponentially many facts, this option is not really viable. Rule (4) expresses how to derive the known-status of a unit from a known-status derived so far, using a refers-relation among units.

Default Negation. Consider the program code below, which is also about user modeling. Facts (1), (2), and (3) have been described above. It is the purpose of rule (5) to compute the known-status of a unit on a higher level, based on the known_units and unknown_units. The relation unknown_unit_inferred, which is computed by rule (6), is the one exported by the user-model computation to the rest of the program.

[Figure 1: System architecture: sliced books (e.g., "Analysis" by Wolter/Dahn, "Mathematik" by Luderer, "Logik" by Furbach), slicing and metadata annotation (types, keywords, references, required units), and the knowledge management system with its deduction system, logic programs, internalized metadata, and metadata database, accessed via CGI and TCP/IP.]

    %% Actual user knowledge:
    known_unit(analysis/1/2/_ALL_).              (1)
    unknown_unit(analysis/1/2/1).                (2)
    refers(analysis/1/2/3, analysis/1/0/4).      (3)

    %% Program rules:
    known_unit_inferred(Book/Unit) :-            (5)
        known_unit(Book/Unit),
        not unknown_unit(Book/Unit).
    unknown_unit_inferred(Book/Unit) :-          (6)
        not known_unit_inferred(Book/Unit).

Now, facts (1) and (2) together seem to indicate inconsistent information, as the unit analysis/1/2/1 is both a known_unit and an unknown_unit. Rule (5), however, resolves this apparent inconsistency. The pragmatically justified intuition behind it is to be cautious in such cases: when in doubt, a unit shall belong to the unknown_unit_inferred relation. Also, if nothing has been said explicitly about whether a unit is a known_unit or an unknown_unit, it shall belong to the unknown_unit_inferred relation as well. Exactly this is achieved by using the default negation operator "not", when used as written and equipped with a suitable semantics. (Observe that with a classical interpretation of "not", counterintuitive models exist. Each model computed by our system is a possible model according to [4].)

Disjunctions and Integrity Constraints. Consider the clause set shown below. It states that if there is more than one definition unit for some keyword, then (at least) one of them must be a computed unit, i.e., one that will be included in the generated document (the symbol ";" means "or"). Beyond having proper disjunctions in the head, it is also possible to have rules without a head, which act as integrity constraints.
    computed_unit(Book1/Unit1) ;
    computed_unit(Book2/Unit2) :-
        definition(Book1/Unit1, Keyword),
        definition(Book2/Unit2, Keyword),
        not equal(Book1/Unit1, Book2/Unit2).

3 KRHYPER

The logic was chosen carefully: there is a balance between expressivity and the possibility of building an efficient implementation for model computation. It is well known that for (propositional) stratified normal programs the intended model can be computed in polynomial time, which does not hold, e.g., for stable models. For our application, we had no difficulty restricting ourselves to stratified programs. Of course, disjunctive programs are intractable in general, even without default negation. Now, in three steps, the calculus derives from the clauses in our example the following hyper tableau:

    known_unit(analysis/1/2/_ALL_)
    unknown_unit(analysis/1/2/1)
    refers(analysis/1/2/3, analysis/1/0/4)
    known_unit(analysis/1/0/4)
    known_unit_inferred(analysis/1/2/_ALL_) - {known_unit_inferred(analysis/1/2/1)}

The topmost three lines stem from clauses (1), (2), and (3), respectively. By combining clauses (2), (3), and (4), the calculus infers, as displayed in the fourth line, that the unit analysis/1/0/4 should be known. The concluding line of the derivation is obtained from clauses (1), (2), and (5). It says that the known_unit_inferred relation contains all sub-units of analysis/1/2 or, more technically, all ground instances of analysis/1/2/_ALL_ except for the unit analysis/1/2/1. Being able to represent interpretations in this way enables us not to ground the whole program in a preprocessing step, as commonly assumed in related approaches, but to carry out inferences directly at the first-order level.

4 Related Work

Interactive and personalized e-learning systems have been discussed in the literature. In [7] an interactive electronic book (i-book) is presented. This i-book is devoted to teaching adaptive and neural systems to undergraduates in electrical engineering. The salient feature of this book is the tight integration of simulators demonstrating the various topics in adaptive systems, and the incremental use of simulation during each chapter in order to develop a certain subject successively. The i-book, though, does not cope with different learning scenarios or user profiles and offers the same documents to every student. The paper [3] discusses perspectives for electronic books and emphasizes the need for personalized and user-specific content. This article concentrates on the personalized presentation of content, for instance by means of applying style sheets to the content that is delivered to the user. Personalization applied to the content of the material, as done in our approach, is not considered. Based on an explicit representation of the structure of the concepts in the domain of interest and a user model, [8] and [6] dynamically generate instructional courses.
These approaches use planning techniques in order to determine the relevant materials on a per-user basis. The user model in [8] describes the student's knowledge and contains history information about previous sessions as well as personal traits and preferences. Interactivity is not integrated in the work described in [8]. In [6] an interactive and adaptive system is presented. Scenarios and user profiles are supported. Here, the user profile distinguishes between knowledge, comprehension, and application in order to reflect the different states of knowledge during learning. These two approaches differ from the Living Book in two main aspects: firstly, in the Living Book we have chosen a deduction-based approach instead of planning techniques, and secondly, the user profile adapts according to what the users specify that they know. For instance, the user indicates those units that are already known. From this the system deduces everything that should be known too, based on dependence relationships between knowledge units. In [8; 6], the user model is adapted based on information the system gathers from a user during a session, e.g., whether a certain exercise has been solved successfully.

5 Conclusions

The e-learning application described here represents a nontrivial application for deduction techniques with respect to the size of the fact base. There are roughly facts per book, and currently there are 12 books in the repository. The answer times vary between one second and one minute. Although the response times are sometimes still a bit too long, we think that deduction techniques are indeed feasible (we are currently reimplementing KRHYPER, and the new implementation will be faster by at least an order of magnitude). Also, we think it becomes obvious from our approach that the techniques used may be applied to document management applications in general, such as the generation of problem-specific Unix man pages or the assembly of personalized electronic newspapers.
References

[1] Peter Baumgartner, Ulrich Furbach, and Ilkka Niemelä. Hyper Tableaux. In Proc. JELIA '96, number 1126 in Lecture Notes in Artificial Intelligence. European Workshop on Logic in AI, Springer.
[2] Gerhard Brewka, Jürgen Dix, and Kurt Konolige. Nonmonotonic Reasoning, volume 73 of Lecture Notes. CSLI Publications.
[3] François Bry and Michael Kraus. Perspectives for electronic books in the World Wide Web age. The Electronic Library Journal, 20(4).
[4] Edward P. F. Chan. A Possible World Semantics for Disjunctive Databases. IEEE Transactions on Knowledge and Data Engineering, 5(2).
[5] Ingo Dahn. Slicing book technology: providing online support for textbooks. In Helmut Hoyer, editor, Proc. of the 20th World Conference on Open and Distance Learning, Düsseldorf, Germany.
[6] E. Melis, E. Andres, J. Büdenbender, A. Frischauf, G. Goguadze, P. Libbrecht, M. Pollet, and C. Ullrich. ActiveMath: A generic and adaptive web-based learning environment. Journal of Artificial Intelligence and Education, 12(4).
[7] Jose C. Principe, Neil R. Euliano, and W. Curt Lefebvre. Innovating adaptive and neural systems instruction with interactive electronic books. Proceedings of the IEEE, 88(1):81-95.
[8] J. Vassileva. Dynamic courseware generation at the WWW. In Proc. of the 8th World Conference on AI and Education (AIED '97), Kobe, Japan.

Tutorial Dialogs on Mathematical Proofs

Christoph Benzmüller (1), Armin Fiedler (1), Malte Gabsdil (2), Helmut Horacek (1), Ivana Kruijff-Korbayová (2), Manfred Pinkal (2), Jörg Siekmann (1), Dimitra Tsovaltzi (2), Bao Quoc Vo (1), Magdalena Wolska (2)
(1) Fachrichtung Informatik, (2) Fachrichtung Computerlinguistik
Universität des Saarlandes, Postfach , D Saarbrücken, Germany

Abstract

The representation of knowledge for a mathematical proof assistant is generally used exclusively for the purpose of proving theorems. Aiming at a broader scope, we examine the use of mathematical knowledge in a mathematical tutoring system with flexible natural language dialog. Based on an analysis of a corpus of dialogs we collected with a simulated tutoring system for teaching proofs in naive set theory, we identify several interesting problems which lead to requirements for mathematical knowledge representation. These include resolving references between natural language expressions and mathematical formulas, determining the semantic role of mathematical formulas in context, and determining the contribution of inference steps specified by the user.

1 Introduction

In a mathematical proof assistant (MPA), knowledge representation (if any) is used for the purpose of proving theorems. State-of-the-art MPAs such as COQ, NUPRL, MIZAR, ISABELLE-HOL, PVS, and ΩMEGA usually provide a combination of proof automation and facilities for user interaction, and most of them are connected to a structured mathematical knowledge base. In spite of their common purpose (proving theorems), the heterogeneity of MPAs (they are based on different logics, calculi, semantics, representations of proofs, etc.) poses a challenge for the communication of mathematical knowledge between them; most importantly, a common ontology and semantics are missing.
Some of these issues are currently investigated in the Mathematical Knowledge Management research initiative [4]. (This work is supported by the SFB 378 at Saarland University, Saarbrücken, and the EU training network CALCULEMUS (HPRN-CT ) funded in the EU 5th framework.) However, appropriate knowledge representation in MPAs to support the search for a proof is only one of the issues to be addressed in the future of computer-aided mathematics, and in computer-aided mathematical education in particular. Among the challenges involved in human-oriented automated proving is the coupling of MPAs with natural language processing. This in turn gives rise to additional requirements on knowledge representation. For example, it has been shown in [9] that the mathematical domain representation as used for proof search and proof planning is not sufficient for the purpose of proof presentation. Some methods for more natural references to rules have been demonstrated in [16]. In this paper, we present further requirements on mathematical knowledge representation for the purpose of handling flexible natural language dialog in a mathematical tutoring system. Our discussion is based on data we collected through experiments with a simulated tutoring dialog system for teaching proofs in naive set theory. Some state-of-the-art tutorial systems allow limited dialog, where the input is either menu-based or requires exact wording [24; 2; 13]. This contrasts with Moore's empirical findings showing that flexible natural language dialog is needed to support active learning [23]. The latter approach is taken, for example, in the CIRCSIM-Tutor project [22], which aims to build a natural-language-based tutoring system for first-year medical students to learn about the reflex control of blood pressure. The goal of our project is to develop a mathematical tutoring system with flexible natural language dialog to support mathematical problem solving.
We employ a modular approach, keeping a strict separation between the different kinds of knowledge involved in the processing. The design of the system components is informed by the analysis of a corpus of tutorial dialog data we collected in an experiment. The outline of this paper is as follows. We first

present the aims of our project, illustrate the current application scenario, and motivate the choice of the mathematical domain. The modeling of static and dynamic knowledge within this domain is our first contribution. Next, we describe an experiment in which we collected a corpus of natural language tutorial dialogs in the chosen mathematical domain. On the basis of the analysis of our corpus, we then present the key requirements and challenges for the representation of mathematical knowledge and the design of a mathematical reasoning tool.

2 The DIALOG Project

The goal of the DIALOG project [25] is (i) to empirically investigate the use of flexible natural language dialog in tutoring mathematics, and (ii) to develop an experimental prototype system gradually embodying the empirical findings. The experimental system will engage in a dialog in written natural language (and later also in multimodal forms of communication based on diagrams, spoken language, and animated mathematical function displays) to help a student understand and construct mathematical proofs. The overall scenario for the system is illustrated in Figure 1. We describe its components below.

Learning Environment. In our scenario, the student takes an interactive course in some field of mathematics within a web-based learning environment. We use ACTIVEMATH [19; 21], a generic web-based learning system that dynamically generates interactive (mathematical) courses adapted to the student's goals, preferences, capabilities, and knowledge. It enables a student to select the material he/she wants to study and to review his/her knowledge about the subject matter. After finishing a learning unit, the student may opt for an interactive exercise session to actively apply what he/she has learned. It is primarily the interactive exercises that we aim to enrich with the possibility of flexible tutoring dialog using natural language.
The features of ACTIVEMATH include: user modeling and monitoring facilities; user-adapted content selection, sequencing, and presentation; support of active and exploratory learning by external tools; use of (mathematical) problem solving methods; and re-usability of the encoded content as well as interoperability between systems. ACTIVEMATH maintains a dynamically updated student model (SM) containing information about the axioms, definitions, and theorems (hence the assertions) and the proof techniques the student has studied and mastered so far. (The DIALOG project is part of the Collaborative Research Center on Resource-Adaptive Cognitive Processes, SFB 378, at Saarland University [26].) This information will also be used by the tutoring dialog system. In addition, we assume an idealized student model (ISM), set up by the author of the learning unit, which specifies the mathematical material a student ideally should know after studying the unit.

Mathematical Proof Assistant. The MPA is used for the problem solving in the mathematical domain underlying the dialogs. This involves the verification (or falsification) of user-specified inference steps and checking whether the application of an inference step leads to a proof state from which a complete proof can be obtained. Mathematical tutorial dialogs thus require (i) stepwise interactive as well as (ii) automated proof construction at a human-oriented level of abstraction. Ideally, both are provided by the MPA. In addition, it should be possible to control the proof strategy used by the MPA (depending on the target of the tutorial session), and the proof(s) constructed by the MPA should exploit only the mathematical knowledge that the student possesses; that is, it should be possible to control the mathematical knowledge used in the proof(s) in accordance with the respective SM and ISM.
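The verification/falsification task can be illustrated in miniature. Over a small finite universe, claims of naive set theory are decidable, so a wrong step can be refuted by exhaustive search for a countermodel. The brute-force checker below is purely illustrative; the actual scenario delegates such checks to the MPA and a model generator.

```python
# Miniature illustration (not the MPA itself): over a small finite
# universe, set-theoretic claims are decidable, so a wrong inference
# step can be falsified by searching for a countermodel. K denotes
# set complement with respect to the universe.

from itertools import combinations, product

UNIVERSE = frozenset(range(3))

def subsets(u):
    """All subsets of u, as frozensets."""
    return [frozenset(c) for r in range(len(u) + 1)
            for c in combinations(sorted(u), r)]

def k(s):
    """Complement with respect to the universe."""
    return UNIVERSE - s

def counterexample(claim):
    """Return (A, B) falsifying claim(A, B), or None if the claim holds."""
    for a, b in product(subsets(UNIVERSE), repeat=2):
        if not claim(a, b):
            return (a, b)
    return None

# A correct step (a de Morgan law) has no countermodel:
assert counterexample(lambda a, b: k(a | b) == k(a) & k(b)) is None
# A wrong step is refuted by a concrete countermodel:
assert counterexample(lambda a, b: k(a | b) == k(a) | k(b)) is not None
```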
The ΩMEGA system [29], with its advanced proof presentation and proof planning facilities, provides an adequate starting point for integrating an MPA into our scenario. (Concerning the student model: ACTIVEMATH keeps track of what material the student has studied and for how long [20]; it also lets the student skip material he is confident he already knows well.)

Proof Manager. In the course of the interactive tutorial session, the user may explore alternative proofs, or make various attempts at constructing a valid proof, involving both valid and invalid inference steps. In addition, tutoring may require the possibility of comparing the problem-solving attempts made by the user with target master proofs. The student's problem-solving attempts with respect to the proof space need to be monitored for the sake of managing the dialog flow. It is the task of the proof manager in our scenario to provide this interface and additional book-keeping between the MPA and the dialog manager.

Dialog Manager. When the student enters a tutorial dialog session, the interaction is handled by the dialog manager. We employ the Information-State (IS) Update approach to dialog management developed in the TRINDI and SIRIDUS projects [28; 27]. The IS is a centrally maintained data structure which contains a representation of the information accumulated as the dialog progresses, including (i)

private information of the system, and (ii) the information considered to be shared between the system and the user. A dialog is modeled as a sequence of dialog moves, each of which is a transition from one information state to the next. The system interprets each of the user's utterances with respect to the current IS and then computes a transition to a new IS. When it is the system's turn, the next move is selected according to the IS at that point, the corresponding utterance is produced, and again the IS is updated. The dialog manager relies on the input analysis and output generation modules to exchange data between the user and the system; it further relies on the proof manager to monitor the mathematical problem solving and to access the MPA.

[Figure 1: DIALOG project scenario. The user communicates through analysis and generation modules with the dialog manager, which draws on linguistic resources, dialog resources, pedagogical knowledge, and a user model, and is connected via the proof manager to the mathematical proof assistant (ΩMEGA), the mathematical knowledge base (MBASE), and the learning environment (ACTIVEMATH).]

Knowledge Resources. The static knowledge in our scenario comprises linguistic resources, dialog resources, pedagogical knowledge, and mathematical knowledge. The dynamic knowledge includes the SM and ISM mentioned above, as well as the information state maintained by the dialog manager. The linguistic resources include the grammar and the lexicon used for analyzing the natural language input and generating the output. We combine the use of generic, domain-independent resources with resources specific to the particular area of mathematics being taught. The static dialog resources include (i) dialog move selection rules (i.e., rules that determine what dialog move the system will make next, given the current information state and a communicative goal), and (ii) dialog information-state update rules (i.e., rules that dynamically change the information state depending on the dialog moves the user or the system have successfully made).
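In the spirit of the information-state update approach, the interplay of selection and update rules can be sketched as a toy loop. All state fields, moves, and rule names below are invented for illustration and are not taken from the TRINDI or SIRIDUS implementations.

```python
# Toy information-state update sketch: a selection rule picks the next
# system move from the current state; an update rule records a
# successfully made move. All names are illustrative.

def select_move(is_):
    """Selection rule: ask for the next step once the user has spoken."""
    return "prompt_next_step" if is_["shared"] else "greet"

def update(is_, speaker, move):
    """Update rule: record a successfully made move in the shared part."""
    is_["shared"] = is_["shared"] + [(speaker, move)]
    return is_

state = {"private": {"goal": "tutor_proof"}, "shared": []}
state = update(state, "user", "A and B are disjoint")  # user utterance
state = update(state, "system", select_move(state))    # system turn
print(state["shared"])
# -> [('user', 'A and B are disjoint'), ('system', 'prompt_next_step')]
```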
We distinguish between domain-independent, generic dialog moves, such as meta-communication moves (used, e.g., for clarification and correction), and domain-specific ones, such as various kinds of hinting moves [11; 31], which may be further specialized for tutoring in the mathematics domain. The pedagogical knowledge specifies generic and domain-specific teaching strategies. This includes the specification of didactic versus socratic teaching methods. The hinting dialog moves mentioned above are also derived from the pedagogical knowledge. Finally, the static mathematical knowledge consists of assertions (i.e., axioms, lemmata, theorems), domain-dependent proof rules and methods, corresponding diagrammatic illustrations, as

well as selected completed master proofs. This mathematical knowledge is typically highly structured into mathematical sub-domains, and it usually forms a dependency/inheritance graph. Examples of systems maintaining structured corpora of formalized mathematics are MIZAR with its mathematical library [5], NUPRL's knowledge base [3], and the MBase system [18], which is the system of choice in our project. An essential requirement in our scenario is that the mathematical knowledge is shared between the learning environment, the DIALOG system, and the mathematical assistant. One problem in many current proof systems is to guarantee consistent handling of, and data flow between, the declarative and the procedural view of assertions. In [32], we suggest a solution that uses declarative entries in the mathematical knowledge base and automatically generates all potential procedural views from these declarative entries for each given proof context. We already mentioned that there may be a limited number of fixed master proofs for the proof exercises to be employed in guiding the tutorial session. These can be statically maintained in the mathematical knowledge base. Generally, however, there are infinitely many variants of proofs for a mathematical theorem, and a significant number of these proofs is acceptable for tutoring, relative to the knowledge and capabilities of the student. We therefore couple the static modeling of a well-chosen set of master proofs with the dynamic verification of single inference steps and the dynamic generation of proofs by the MPA. The SM (and the ISM) refer to the mathematical knowledge base in the sense that they maintain, for each student, a view on this knowledge base, separating the known from the unknown content. An additional teacher model could provide information such as a specification of the dominant and subdominant mathematical concepts of a learning unit.
Note that the structure imposed by the latter information is likely to differ from the hierarchical structure of the knowledge base itself [30].

Our Current Domain of Choice: Naive Set Theory. For the first phase of the project we chose naive set theory as the mathematical domain of interest. We integrated a course on naive set theory into ACTIVEMATH. Basic notions (e.g., set), definitions (e.g., subset), and set operations (e.g., union, intersection, set complement, power set) are structurally represented in this course. Typical examples are presented after each definition, so that the student gets a good intuition about the more abstract concepts. Students are also exposed to Venn diagrams, which provide an intuitive understanding of set operations. Throughout the course, the student is continuously introduced to the more important properties of this domain, for example the laws of commutativity, associativity, and distributivity, or the de Morgan laws. The naive set theory domain has several advantages: (i) The problems in this domain are almost always automatically provable [6; 7]. (ii) The domain is not too complex for the intended users (i.e., first-year students). (iii) Simple problems are typically even decidable, so that wrong proof steps can be detected by the generation of counterexamples with a model generator [6]. (iv) The domain provides interesting opportunities for multi-modal interaction using Venn and Spider diagrams. (Sound and complete inference systems exist for the representation layer of Spider diagrams; cf. [14] and the references therein.) The disadvantages of this domain are: (i) Its modeling is built directly on predicate logic, without higher-level concepts and fields of mathematics on intermediate layers between the base logic and the domain itself. Hence, there are no hierarchical dependencies on other mathematical subdomains, such as real numbers, continuous functions, Abelian groups, etc.
(ii) Consequently, the hierarchical expansion depth of proof plans and proofs is also relatively low. (The multi-modal aspects mentioned above are, however, not the subject of this paper and will be considered in later experiments.) Although this raised some initial doubts about the suitability of the naive set theory domain, the experiment described in the next section revealed that even such a relatively simple mathematical domain has sufficient complexity to allow meaningful tutorial dialog sessions. We shall, however, also consider more complex mathematical domains in future experiments.

3 Empirical Study

We conducted a Wizard-of-Oz (WOz) experiment in order to collect a corpus of tutorial dialogs in the naive set theory domain. We implemented a tool to support the experiment and collect the dialog data on-line [10]. In a WOz experiment, the subject interacts through an interface with a human wizard simulating the behavior of a system [8]. The WOz methodology is commonly used to investigate human-computer interaction in systems under development. One reason for using a WOz setting rather than a human tutor is that humans have been observed to interact differently with computers than with other humans. Another reason is that the tutor should follow the specific algorithm(s) which we are implementing in our system. In this way, the dialog data we collect (i) represent the users' behavior in interactions following these algorithms and (ii) provide early feedback on the algorithms. In subsequent experiments

in the project, implemented components can substitute for some of the tasks now carried out by the wizard, while preserving the overall experimental setup.

Figure 2: Declarative, procedural, and diagrammatic knowledge in the domain of naive set theory.

We invited 24 subjects to participate in the experiment. They were students with an educational background in the humanities (e.g., law, economics, various languages, psychology) or the sciences (e.g., biology, chemistry, computer science, computational linguistics). Their prior mathematical knowledge ranged from little to fair. For each subject, the experiment consisted of the following phases (each of which had a fixed maximum duration):

(1) Preparation and pre-test: First, the subject filled in a background questionnaire. Then he/she studied written lesson material explaining basic concepts and providing a collection of six lemmata about properties of sets and eleven lemmata about properties of powersets. (In the first experiment the lesson material was still presented on paper, not through the ACTIVEMATH system.) Finally he/she was asked to prove (on paper) the theorem K(A) ∈ P(K(A ∩ B)).

(2) Tutoring session: The subject was asked to evaluate a tutoring system with natural language dialog capabilities. He/she was given three theorems to prove: The theorem K((A ∪ B) ∩ (C ∪ D)) = (K(A) ∩ K(B)) ∪ (K(C) ∩ K(D)) was used first to let the subject familiarize himself/herself with the system's interface. Then two more complex theorems were presented (in different order to different subjects): (a) A ∩ B ∈ P((A ∪ C) ∩ (B ∪ C)); (b) If A ⊆ K(B), then B ⊆ K(A). The interface enabled the subject to type text or insert mathematical symbols by clicking on buttons; it also displayed the complete dialog with both the tutor's and the subject's utterances. The
subject was instructed to enter partial steps of a proof rather than the complete proof as a whole, in order to enable a dialog with the system.

(3) Post-test and evaluation questionnaire: The subject was asked to write down (on paper) a proof for one more theorem. (The comparison of the student's performance on the pre-test and post-test proofs serves to evaluate the learning gain from the tutoring session.) To conclude the experiment, he/she was asked to fill in a questionnaire addressing various aspects of the system and its usability.

The tutor-wizard's task was to respond to the student's utterances following a given algorithm. The wizard first classified the completeness, accuracy, and relevance of the subject's utterance with respect to a valid proof of the theorem at hand. Then, the wizard decided what dialog moves to make next and verbalized them. Depending on the tutoring strategy employed by the wizard for a given subject, the dialog move options included informing the subject about the completeness, accuracy, and relevance of the utterance, giving hints on how to proceed further, explaining a step under consideration, prompting for the next step, or entering into a clarification dialog. The wizard was free to mix text with formulas [11].

4 A Preliminary Analysis of the Test Dialogs

In this section, we examine the issues involved in the natural language analysis of dialog utterances containing mathematical expressions, and the role of mathematical domain knowledge. Examples of dialog utterances that illustrate the phenomena addressed by the analysis below are shown in Figure 3 (the original German versions of ut-
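The wizard's decision procedure described above — classify an utterance along the dimensions of completeness, accuracy, and relevance, then select dialog moves — can be sketched as follows. This is an illustrative reconstruction, not the algorithm actually used in the experiment; the names and the particular move-selection policy are our assumptions.

```python
from dataclasses import dataclass
from enum import Enum

class Move(Enum):
    CONFIRM = "inform the subject that the step is correct"
    HINT = "give a hint on how to proceed further"
    EXPLAIN = "explain the step under consideration"
    PROMPT = "prompt for the next proof step"
    CLARIFY = "enter a clarification dialog"

@dataclass
class Judgment:
    """The wizard's classification of a subject's utterance."""
    complete: bool  # does the utterance constitute a full proof step?
    accurate: bool  # is the step logically correct?
    relevant: bool  # does the step contribute to a valid proof?

def choose_moves(j: Judgment) -> list[Move]:
    """One hypothetical move-selection policy; the actual tutoring
    strategies varied from subject to subject."""
    if not j.relevant:
        return [Move.CLARIFY]             # unclear how the step helps: ask
    if not j.accurate:
        return [Move.EXPLAIN, Move.HINT]  # wrong step: explain, then hint
    if not j.complete:
        return [Move.CONFIRM, Move.HINT]  # right direction, but unfinished
    return [Move.CONFIRM, Move.PROMPT]    # correct step: acknowledge, move on

# Example: a correct but incomplete step draws a confirmation plus a hint.
print([m.name for m in choose_moves(Judgment(complete=False, accurate=True, relevant=True))])
# → ['CONFIRM', 'HINT']
```

In subsequent experiments, such a policy is exactly the kind of component that could substitute for part of the wizard's work while keeping the rest of the setup unchanged.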

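Point (iii) of the domain discussion — that wrong proof steps can be detected by generating counterexamples with a model generator — can be illustrated for the finite case with a brute-force sketch. This is not the model generator of [6], merely a minimal stand-in that enumerates all assignments of subsets of a small universe to the set variables of a claim (here K denotes complement relative to the universe):

```python
from itertools import combinations, product

def subsets(universe):
    """All subsets of a finite universe, as frozensets."""
    items = sorted(universe)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

def find_counterexample(claim, universe, nvars):
    """Try every assignment of subsets of `universe` to the `nvars`
    set variables of `claim`; return a falsifying assignment, or None."""
    for assignment in product(subsets(universe), repeat=nvars):
        if not claim(frozenset(universe), *assignment):
            return assignment
    return None

def K(u, x):
    """Set complement relative to the universe u."""
    return u - x

U = {1, 2, 3}

# The familiarization theorem holds in every such finite model:
thm = lambda u, a, b, c, d: \
    K(u, (a | b) & (c | d)) == (K(u, a) & K(u, b)) | (K(u, c) & K(u, d))
assert find_counterexample(thm, U, 4) is None

# A typical wrong step (union and intersection swapped on the right)
# is refuted immediately by a concrete counterexample:
wrong = lambda u, a, b, c, d: \
    K(u, (a | b) & (c | d)) == (K(u, a) | K(u, b)) & (K(u, c) | K(u, d))
print(find_counterexample(wrong, U, 4))
```

Exhaustive enumeration is of course only feasible because the universe is tiny (8^4 assignments here); it is meant to convey the idea of counterexample generation, not the technique of [6].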
Exercise Design and Implementation in Active Math

Exercise Design and Implementation in Active Math Interactivity of Exercises in ActiveMath Giorgi Goguadze a, Alberto González Palomo b, Erica Melis c, a University of Saarland, Saarbrücken, Germany b German Research Institute for Artificial Intelligence

More information

An Intelligent Sales Assistant for Configurable Products

An Intelligent Sales Assistant for Configurable Products An Intelligent Sales Assistant for Configurable Products Martin Molina Department of Artificial Intelligence, Technical University of Madrid Campus de Montegancedo s/n, 28660 Boadilla del Monte (Madrid),

More information

Learning Mathematics with

Learning Mathematics with Deutsches Forschungszentrum für f r Künstliche K Intelligenz Learning Mathematics with Jörg Siekmann German Research Centre for Artificial Intelligence DFKI Universität des Saarlandes e-learning: Systems

More information

System Description: The MathWeb Software Bus for Distributed Mathematical Reasoning

System Description: The MathWeb Software Bus for Distributed Mathematical Reasoning System Description: The MathWeb Software Bus for Distributed Mathematical Reasoning Jürgen Zimmer 1 and Michael Kohlhase 2 1 FB Informatik, Universität des Saarlandes jzimmer@mathweb.org 2 School of Computer

More information

A Virtual Assistant for Web-Based Training In Engineering Education

A Virtual Assistant for Web-Based Training In Engineering Education A Virtual Assistant for Web-Based Training In Engineering Education Frédéric Geoffroy (1), Esma Aimeur (2), and Denis Gillet (1) (1) Swiss Federal Institute of Technology in Lausanne (EPFL) LA-I2S-STI,

More information

WHAT ARE MATHEMATICAL PROOFS AND WHY THEY ARE IMPORTANT?

WHAT ARE MATHEMATICAL PROOFS AND WHY THEY ARE IMPORTANT? WHAT ARE MATHEMATICAL PROOFS AND WHY THEY ARE IMPORTANT? introduction Many students seem to have trouble with the notion of a mathematical proof. People that come to a course like Math 216, who certainly

More information

So today we shall continue our discussion on the search engines and web crawlers. (Refer Slide Time: 01:02)

So today we shall continue our discussion on the search engines and web crawlers. (Refer Slide Time: 01:02) Internet Technology Prof. Indranil Sengupta Department of Computer Science and Engineering Indian Institute of Technology, Kharagpur Lecture No #39 Search Engines and Web Crawler :: Part 2 So today we

More information

Automated Theorem Proving - summary of lecture 1

Automated Theorem Proving - summary of lecture 1 Automated Theorem Proving - summary of lecture 1 1 Introduction Automated Theorem Proving (ATP) deals with the development of computer programs that show that some statement is a logical consequence of

More information

A Framework for the Delivery of Personalized Adaptive Content

A Framework for the Delivery of Personalized Adaptive Content A Framework for the Delivery of Personalized Adaptive Content Colm Howlin CCKF Limited Dublin, Ireland colm.howlin@cckf-it.com Danny Lynch CCKF Limited Dublin, Ireland colm.howlin@cckf-it.com Abstract

More information

Supporting Active Database Learning and Training through Interactive Multimedia

Supporting Active Database Learning and Training through Interactive Multimedia Supporting Active Database Learning and Training through Interactive Multimedia Claus Pahl ++353 +1 700 5620 cpahl@computing.dcu.ie Ronan Barrett ++353 +1 700 8616 rbarrett@computing.dcu.ie Claire Kenny

More information

Five High Order Thinking Skills

Five High Order Thinking Skills Five High Order Introduction The high technology like computers and calculators has profoundly changed the world of mathematics education. It is not only what aspects of mathematics are essential for learning,

More information

Standardization of Components, Products and Processes with Data Mining

Standardization of Components, Products and Processes with Data Mining B. Agard and A. Kusiak, Standardization of Components, Products and Processes with Data Mining, International Conference on Production Research Americas 2004, Santiago, Chile, August 1-4, 2004. Standardization

More information

Interactive Ontology-Based User Modeling for Personalized Learning Content Management

Interactive Ontology-Based User Modeling for Personalized Learning Content Management Interactive Ontology-Based User Modeling for Personalized Learning Content Management Ronald Denaux 1, Vania Dimitrova 2, Lora Aroyo 1 1 Computer Science Department, Eindhoven Univ. of Technology, The

More information

AN INTELLIGENT TUTORING SYSTEM FOR LEARNING DESIGN PATTERNS

AN INTELLIGENT TUTORING SYSTEM FOR LEARNING DESIGN PATTERNS AN INTELLIGENT TUTORING SYSTEM FOR LEARNING DESIGN PATTERNS ZORAN JEREMIĆ, VLADAN DEVEDŽIĆ, DRAGAN GAŠEVIĆ FON School of Business Administration, University of Belgrade Jove Ilića 154, POB 52, 11000 Belgrade,

More information

8. KNOWLEDGE BASED SYSTEMS IN MANUFACTURING SIMULATION

8. KNOWLEDGE BASED SYSTEMS IN MANUFACTURING SIMULATION - 1-8. KNOWLEDGE BASED SYSTEMS IN MANUFACTURING SIMULATION 8.1 Introduction 8.1.1 Summary introduction The first part of this section gives a brief overview of some of the different uses of expert systems

More information

ONLINE EXERCISE SYSTEM A Web-Based Tool for Administration and Automatic Correction of Exercises

ONLINE EXERCISE SYSTEM A Web-Based Tool for Administration and Automatic Correction of Exercises ONLINE EXERCISE SYSTEM A Web-Based Tool for Administration and Automatic Correction of Exercises Daniel Baudisch, Manuel Gesell and Klaus Schneider Embedded Systems Group, University of Kaiserslautern,

More information

131-1. Adding New Level in KDD to Make the Web Usage Mining More Efficient. Abstract. 1. Introduction [1]. 1/10

131-1. Adding New Level in KDD to Make the Web Usage Mining More Efficient. Abstract. 1. Introduction [1]. 1/10 1/10 131-1 Adding New Level in KDD to Make the Web Usage Mining More Efficient Mohammad Ala a AL_Hamami PHD Student, Lecturer m_ah_1@yahoocom Soukaena Hassan Hashem PHD Student, Lecturer soukaena_hassan@yahoocom

More information

Data Validation with OWL Integrity Constraints

Data Validation with OWL Integrity Constraints Data Validation with OWL Integrity Constraints (Extended Abstract) Evren Sirin Clark & Parsia, LLC, Washington, DC, USA evren@clarkparsia.com Abstract. Data validation is an important part of data integration

More information

Personalized e-learning a Goal Oriented Approach

Personalized e-learning a Goal Oriented Approach Proceedings of the 7th WSEAS International Conference on Distance Learning and Web Engineering, Beijing, China, September 15-17, 2007 304 Personalized e-learning a Goal Oriented Approach ZHIQI SHEN 1,

More information

2 AIMS: an Agent-based Intelligent Tool for Informational Support

2 AIMS: an Agent-based Intelligent Tool for Informational Support Aroyo, L. & Dicheva, D. (2000). Domain and user knowledge in a web-based courseware engineering course, knowledge-based software engineering. In T. Hruska, M. Hashimoto (Eds.) Joint Conference knowledge-based

More information

Extending Semantic Resolution via Automated Model Building: applications

Extending Semantic Resolution via Automated Model Building: applications Extending Semantic Resolution via Automated Model Building: applications Ricardo Caferra Nicolas Peltier LIFIA-IMAG L1F1A-IMAG 46, Avenue Felix Viallet 46, Avenue Felix Viallei 38031 Grenoble Cedex FRANCE

More information

CHAPTER 7 GENERAL PROOF SYSTEMS

CHAPTER 7 GENERAL PROOF SYSTEMS CHAPTER 7 GENERAL PROOF SYSTEMS 1 Introduction Proof systems are built to prove statements. They can be thought as an inference machine with special statements, called provable statements, or sometimes

More information

A Pattern-based Framework of Change Operators for Ontology Evolution

A Pattern-based Framework of Change Operators for Ontology Evolution A Pattern-based Framework of Change Operators for Ontology Evolution Muhammad Javed 1, Yalemisew M. Abgaz 2, Claus Pahl 3 Centre for Next Generation Localization (CNGL), School of Computing, Dublin City

More information

Facilitating Knowledge Intelligence Using ANTOM with a Case Study of Learning Religion

Facilitating Knowledge Intelligence Using ANTOM with a Case Study of Learning Religion Facilitating Knowledge Intelligence Using ANTOM with a Case Study of Learning Religion Herbert Y.C. Lee 1, Kim Man Lui 1 and Eric Tsui 2 1 Marvel Digital Ltd., Hong Kong {Herbert.lee,kimman.lui}@marvel.com.hk

More information

C. Wohlin and B. Regnell, "Achieving Industrial Relevance in Software Engineering Education", Proceedings Conference on Software Engineering

C. Wohlin and B. Regnell, Achieving Industrial Relevance in Software Engineering Education, Proceedings Conference on Software Engineering C. Wohlin and B. Regnell, "Achieving Industrial Relevance in Software Engineering Education", Proceedings Conference on Software Engineering Education & Training, pp. 16-25, New Orleans, Lousiana, USA,

More information

Integrating Benders decomposition within Constraint Programming

Integrating Benders decomposition within Constraint Programming Integrating Benders decomposition within Constraint Programming Hadrien Cambazard, Narendra Jussien email: {hcambaza,jussien}@emn.fr École des Mines de Nantes, LINA CNRS FRE 2729 4 rue Alfred Kastler BP

More information

(Refer Slide Time: 01:52)

(Refer Slide Time: 01:52) Software Engineering Prof. N. L. Sarda Computer Science & Engineering Indian Institute of Technology, Bombay Lecture - 2 Introduction to Software Engineering Challenges, Process Models etc (Part 2) This

More information

A Knowledge-based Product Derivation Process and some Ideas how to Integrate Product Development

A Knowledge-based Product Derivation Process and some Ideas how to Integrate Product Development A Knowledge-based Product Derivation Process and some Ideas how to Integrate Product Development (Position paper) Lothar Hotz and Andreas Günter HITeC c/o Fachbereich Informatik Universität Hamburg Hamburg,

More information

Semantic Search in Portals using Ontologies

Semantic Search in Portals using Ontologies Semantic Search in Portals using Ontologies Wallace Anacleto Pinheiro Ana Maria de C. Moura Military Institute of Engineering - IME/RJ Department of Computer Engineering - Rio de Janeiro - Brazil [awallace,anamoura]@de9.ime.eb.br

More information

Component visualization methods for large legacy software in C/C++

Component visualization methods for large legacy software in C/C++ Annales Mathematicae et Informaticae 44 (2015) pp. 23 33 http://ami.ektf.hu Component visualization methods for large legacy software in C/C++ Máté Cserép a, Dániel Krupp b a Eötvös Loránd University mcserep@caesar.elte.hu

More information

A Tool for Generating Partition Schedules of Multiprocessor Systems

A Tool for Generating Partition Schedules of Multiprocessor Systems A Tool for Generating Partition Schedules of Multiprocessor Systems Hans-Joachim Goltz and Norbert Pieth Fraunhofer FIRST, Berlin, Germany {hans-joachim.goltz,nobert.pieth}@first.fraunhofer.de Abstract.

More information

CS Master Level Courses and Areas COURSE DESCRIPTIONS. CSCI 521 Real-Time Systems. CSCI 522 High Performance Computing

CS Master Level Courses and Areas COURSE DESCRIPTIONS. CSCI 521 Real-Time Systems. CSCI 522 High Performance Computing CS Master Level Courses and Areas The graduate courses offered may change over time, in response to new developments in computer science and the interests of faculty and students; the list of graduate

More information

Analysing the Behaviour of Students in Learning Management Systems with Respect to Learning Styles

Analysing the Behaviour of Students in Learning Management Systems with Respect to Learning Styles Analysing the Behaviour of Students in Learning Management Systems with Respect to Learning Styles Sabine Graf and Kinshuk 1 Vienna University of Technology, Women's Postgraduate College for Internet Technologies,

More information

Baseline Code Analysis Using McCabe IQ

Baseline Code Analysis Using McCabe IQ White Paper Table of Contents What is Baseline Code Analysis?.....2 Importance of Baseline Code Analysis...2 The Objectives of Baseline Code Analysis...4 Best Practices for Baseline Code Analysis...4 Challenges

More information

ONTOLOGY FOR MOBILE PHONE OPERATING SYSTEMS

ONTOLOGY FOR MOBILE PHONE OPERATING SYSTEMS ONTOLOGY FOR MOBILE PHONE OPERATING SYSTEMS Hasni Neji and Ridha Bouallegue Innov COM Lab, Higher School of Communications of Tunis, Sup Com University of Carthage, Tunis, Tunisia. Email: hasni.neji63@laposte.net;

More information

Selbo 2 an Environment for Creating Electronic Content in Software Engineering

Selbo 2 an Environment for Creating Electronic Content in Software Engineering BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 9, No 3 Sofia 2009 Selbo 2 an Environment for Creating Electronic Content in Software Engineering Damyan Mitev 1, Stanimir

More information

Program Visualization for Programming Education Case of Jeliot 3

Program Visualization for Programming Education Case of Jeliot 3 Program Visualization for Programming Education Case of Jeliot 3 Roman Bednarik, Andrés Moreno, Niko Myller Department of Computer Science University of Joensuu firstname.lastname@cs.joensuu.fi Abstract:

More information

A Slot Representation of the Resource-Centric Models for Scheduling Problems

A Slot Representation of the Resource-Centric Models for Scheduling Problems A Slot Representation of the Resource-Centric Models for Scheduling Problems Roman Barták * Charles University, Faculty of Mathematics and Physics Department of Theoretical Computer Science Malostranské

More information

Task-Model Driven Design of Adaptable Educational Hypermedia

Task-Model Driven Design of Adaptable Educational Hypermedia Task-Model Driven Design of Adaptable Educational Hypermedia Huberta Kritzenberger, Michael Herczeg Institute for Multimedia and Interactive Systems University of Luebeck Seelandstr. 1a, D-23569 Luebeck,

More information

Abstraction in Computer Science & Software Engineering: A Pedagogical Perspective

Abstraction in Computer Science & Software Engineering: A Pedagogical Perspective Orit Hazzan's Column Abstraction in Computer Science & Software Engineering: A Pedagogical Perspective This column is coauthored with Jeff Kramer, Department of Computing, Imperial College, London ABSTRACT

More information

Binary Coded Web Access Pattern Tree in Education Domain

Binary Coded Web Access Pattern Tree in Education Domain Binary Coded Web Access Pattern Tree in Education Domain C. Gomathi P.G. Department of Computer Science Kongu Arts and Science College Erode-638-107, Tamil Nadu, India E-mail: kc.gomathi@gmail.com M. Moorthi

More information

CHAPTER 1 INTRODUCTION

CHAPTER 1 INTRODUCTION 1 CHAPTER 1 INTRODUCTION Exploration is a process of discovery. In the database exploration process, an analyst executes a sequence of transformations over a collection of data structures to discover useful

More information

The Student-Project Allocation Problem

The Student-Project Allocation Problem The Student-Project Allocation Problem David J. Abraham, Robert W. Irving, and David F. Manlove Department of Computing Science, University of Glasgow, Glasgow G12 8QQ, UK Email: {dabraham,rwi,davidm}@dcs.gla.ac.uk.

More information

Tableaux Modulo Theories using Superdeduction

Tableaux Modulo Theories using Superdeduction Tableaux Modulo Theories using Superdeduction An Application to the Verification of B Proof Rules with the Zenon Automated Theorem Prover Mélanie Jacquel 1, Karim Berkani 1, David Delahaye 2, and Catherine

More information

Tool Support for Software Variability Management and Product Derivation in Software Product Lines

Tool Support for Software Variability Management and Product Derivation in Software Product Lines Tool Support for Software Variability Management and Product Derivation in Software s Hassan Gomaa 1, Michael E. Shin 2 1 Dept. of Information and Software Engineering, George Mason University, Fairfax,

More information

COURSE RECOMMENDER SYSTEM IN E-LEARNING

COURSE RECOMMENDER SYSTEM IN E-LEARNING International Journal of Computer Science and Communication Vol. 3, No. 1, January-June 2012, pp. 159-164 COURSE RECOMMENDER SYSTEM IN E-LEARNING Sunita B Aher 1, Lobo L.M.R.J. 2 1 M.E. (CSE)-II, Walchand

More information

Does Artificial Tutoring foster Inquiry Based Learning?

Does Artificial Tutoring foster Inquiry Based Learning? Vol. 25, Issue 1, 2014, 123-129 Does Artificial Tutoring foster Inquiry Based Learning? ALEXANDER SCHMOELZ *, CHRISTIAN SWERTZ, ALEXANDRA FORSTNER, ALESSANDRO BARBERI ABSTRACT: This contribution looks

More information

Collecting Polish German Parallel Corpora in the Internet

Collecting Polish German Parallel Corpora in the Internet Proceedings of the International Multiconference on ISSN 1896 7094 Computer Science and Information Technology, pp. 285 292 2007 PIPS Collecting Polish German Parallel Corpora in the Internet Monika Rosińska

More information

CHANCE ENCOUNTERS. Making Sense of Hypothesis Tests. Howard Fincher. Learning Development Tutor. Upgrade Study Advice Service

CHANCE ENCOUNTERS. Making Sense of Hypothesis Tests. Howard Fincher. Learning Development Tutor. Upgrade Study Advice Service CHANCE ENCOUNTERS Making Sense of Hypothesis Tests Howard Fincher Learning Development Tutor Upgrade Study Advice Service Oxford Brookes University Howard Fincher 2008 PREFACE This guide has a restricted

More information

Operations Research and Knowledge Modeling in Data Mining

Operations Research and Knowledge Modeling in Data Mining Operations Research and Knowledge Modeling in Data Mining Masato KODA Graduate School of Systems and Information Engineering University of Tsukuba, Tsukuba Science City, Japan 305-8573 koda@sk.tsukuba.ac.jp

More information

Facilitating Business Process Discovery using Email Analysis

Facilitating Business Process Discovery using Email Analysis Facilitating Business Process Discovery using Email Analysis Matin Mavaddat Matin.Mavaddat@live.uwe.ac.uk Stewart Green Stewart.Green Ian Beeson Ian.Beeson Jin Sa Jin.Sa Abstract Extracting business process

More information

Mathematical Reasoning in Software Engineering Education. Peter B. Henderson Butler University

Mathematical Reasoning in Software Engineering Education. Peter B. Henderson Butler University Mathematical Reasoning in Software Engineering Education Peter B. Henderson Butler University Introduction Engineering is a bridge between science and mathematics, and the technological needs of mankind.

More information

Managing Software Evolution through Reuse Contracts

Managing Software Evolution through Reuse Contracts VRIJE UNIVERSITEIT BRUSSEL Vrije Universiteit Brussel Faculteit Wetenschappen SCI EN T I A V INCERE T ENE BRA S Managing Software Evolution through Reuse Contracts Carine Lucas, Patrick Steyaert, Kim Mens

More information

How To Use Data Mining For Knowledge Management In Technology Enhanced Learning

How To Use Data Mining For Knowledge Management In Technology Enhanced Learning Proceedings of the 6th WSEAS International Conference on Applications of Electrical Engineering, Istanbul, Turkey, May 27-29, 2007 115 Data Mining for Knowledge Management in Technology Enhanced Learning

More information

UPDATES OF LOGIC PROGRAMS

UPDATES OF LOGIC PROGRAMS Computing and Informatics, Vol. 20, 2001,????, V 2006-Nov-6 UPDATES OF LOGIC PROGRAMS Ján Šefránek Department of Applied Informatics, Faculty of Mathematics, Physics and Informatics, Comenius University,

More information

Coverability for Parallel Programs

Coverability for Parallel Programs 2015 http://excel.fit.vutbr.cz Coverability for Parallel Programs Lenka Turoňová* Abstract We improve existing method for the automatic verification of systems with parallel running processes. The technique

More information

Introducing Formal Methods. Software Engineering and Formal Methods

Introducing Formal Methods. Software Engineering and Formal Methods Introducing Formal Methods Formal Methods for Software Specification and Analysis: An Overview 1 Software Engineering and Formal Methods Every Software engineering methodology is based on a recommended

More information

A Review of Data Mining Techniques

A Review of Data Mining Techniques Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 3, Issue. 4, April 2014,

More information

Graduate Co-op Students Information Manual. Department of Computer Science. Faculty of Science. University of Regina

Graduate Co-op Students Information Manual. Department of Computer Science. Faculty of Science. University of Regina Graduate Co-op Students Information Manual Department of Computer Science Faculty of Science University of Regina 2014 1 Table of Contents 1. Department Description..3 2. Program Requirements and Procedures

More information

72. Ontology Driven Knowledge Discovery Process: a proposal to integrate Ontology Engineering and KDD

72. Ontology Driven Knowledge Discovery Process: a proposal to integrate Ontology Engineering and KDD 72. Ontology Driven Knowledge Discovery Process: a proposal to integrate Ontology Engineering and KDD Paulo Gottgtroy Auckland University of Technology Paulo.gottgtroy@aut.ac.nz Abstract This paper is

More information

Modeling the User Interface of Web Applications with UML

Modeling the User Interface of Web Applications with UML Modeling the User Interface of Web Applications with UML Rolf Hennicker,Nora Koch,2 Institute of Computer Science Ludwig-Maximilians-University Munich Oettingenstr. 67 80538 München, Germany {kochn,hennicke}@informatik.uni-muenchen.de

More information

Ontological Representations of Software Patterns

Ontological Representations of Software Patterns Ontological Representations of Software Patterns Jean-Marc Rosengard and Marian F. Ursu University of London http://w2.syronex.com/jmr/ Abstract. This paper 1 is based on and advocates the trend in software

More information

A Variability Viewpoint for Enterprise Software Systems

A Variability Viewpoint for Enterprise Software Systems 2012 Joint Working Conference on Software Architecture & 6th European Conference on Software Architecture A Variability Viewpoint for Enterprise Software Systems Matthias Galster University of Groningen,

More information

Winter 2016 Course Timetable. Legend: TIME: M = Monday T = Tuesday W = Wednesday R = Thursday F = Friday BREATH: M = Methodology: RA = Research Area

Winter 2016 Course Timetable. Legend: TIME: M = Monday T = Tuesday W = Wednesday R = Thursday F = Friday BREATH: M = Methodology: RA = Research Area Winter 2016 Course Timetable Legend: TIME: M = Monday T = Tuesday W = Wednesday R = Thursday F = Friday BREATH: M = Methodology: RA = Research Area Please note: Times listed in parentheses refer to the

More information

Towards Rule-based System for the Assembly of 3D Bricks

Towards Rule-based System for the Assembly of 3D Bricks Universal Journal of Communications and Network 3(4): 77-81, 2015 DOI: 10.13189/ujcn.2015.030401 http://www.hrpub.org Towards Rule-based System for the Assembly of 3D Bricks Sanguk Noh School of Computer

More information

Adapting to the Level of Experience of the User in Mixed-Initiative Web Self-Service Applications

Adapting to the Level of Experience of the User in Mixed-Initiative Web Self-Service Applications Adapting to the Level of Experience of the User in Mixed-Initiative Web Self-Service Applications Mehmet H. Göker Kaidara Software 330 Distel Circle, Suite 150 Los Altos, CA 94022 mgoker@kaidara.com Abstract.

More information

Overview of the TACITUS Project

Overview of the TACITUS Project Overview of the TACITUS Project Jerry R. Hobbs Artificial Intelligence Center SRI International 1 Aims of the Project The specific aim of the TACITUS project is to develop interpretation processes for

More information

Distributed Database for Environmental Data Integration

Distributed Database for Environmental Data Integration Distributed Database for Environmental Data Integration A. Amato', V. Di Lecce2, and V. Piuri 3 II Engineering Faculty of Politecnico di Bari - Italy 2 DIASS, Politecnico di Bari, Italy 3Dept Information

More information

A Framework of Context-Sensitive Visualization for User-Centered Interactive Systems

A Framework of Context-Sensitive Visualization for User-Centered Interactive Systems Proceedings of 10 th International Conference on User Modeling, pp423-427 Edinburgh, UK, July 24-29, 2005. Springer-Verlag Berlin Heidelberg 2005 A Framework of Context-Sensitive Visualization for User-Centered

More information

SCORM Users Guide for Instructional Designers. Version 8

SCORM Users Guide for Instructional Designers. Version 8 SCORM Users Guide for Instructional Designers Version 8 September 15, 2011 Brief Table of Contents Chapter 1. SCORM in a Nutshell... 6 Chapter 2. Overview of SCORM... 15 Chapter 3. Structuring Instruction...

More information

Exploiting User and Process Context for Knowledge Management Systems

Exploiting User and Process Context for Knowledge Management Systems Workshop on User Modeling for Context-Aware Applications at the 8th Int. Conf. on User Modeling, July 13-16, 2001, Sonthofen, Germany Exploiting User and Process Context for Knowledge Management Systems

More information

TOWARDS SIMPLE, EASY TO UNDERSTAND, AN INTERACTIVE DECISION TREE ALGORITHM

TOWARDS SIMPLE, EASY TO UNDERSTAND, AN INTERACTIVE DECISION TREE ALGORITHM TOWARDS SIMPLE, EASY TO UNDERSTAND, AN INTERACTIVE DECISION TREE ALGORITHM Thanh-Nghi Do College of Information Technology, Cantho University 1 Ly Tu Trong Street, Ninh Kieu District Cantho City, Vietnam

More information

Reusable Knowledge-based Components for Building Software. Applications: A Knowledge Modelling Approach

Reusable Knowledge-based Components for Building Software. Applications: A Knowledge Modelling Approach Reusable Knowledge-based Components for Building Software Applications: A Knowledge Modelling Approach Martin Molina, Jose L. Sierra, Jose Cuena Department of Artificial Intelligence, Technical University

More information

Risk Knowledge Capture in the Riskit Method

Risk Knowledge Capture in the Riskit Method Risk Knowledge Capture in the Riskit Method Jyrki Kontio and Victor R. Basili jyrki.kontio@ntc.nokia.com / basili@cs.umd.edu University of Maryland Department of Computer Science A.V.Williams Building

More information

EFFICIENT KNOWLEDGE BASE MANAGEMENT IN DCSP

EFFICIENT KNOWLEDGE BASE MANAGEMENT IN DCSP EFFICIENT KNOWLEDGE BASE MANAGEMENT IN DCSP Hong Jiang Mathematics & Computer Science Department, Benedict College, USA jiangh@benedict.edu ABSTRACT DCSP (Distributed Constraint Satisfaction Problem) has

More information

Implementing Systematic Requirements Management in a Large Software Development Programme

Implementing Systematic Requirements Management in a Large Software Development Programme Implementing Systematic Requirements Management in a Large Software Development Programme Caroline Claus, Michael Freund, Michael Kaiser, Ralf Kneuper 1 Transport-, Informatik- und Logistik-Consulting

More information

Big Data Analytics of Multi-Relationship Online Social Network Based on Multi-Subnet Composited Complex Network

Big Data Analytics of Multi-Relationship Online Social Network Based on Multi-Subnet Composited Complex Network , pp.273-284 http://dx.doi.org/10.14257/ijdta.2015.8.5.24 Big Data Analytics of Multi-Relationship Online Social Network Based on Multi-Subnet Composited Complex Network Gengxin Sun 1, Sheng Bin 2 and

More information

Course Outline, Department of Computing Science, Faculty of Science
COMP 3710-3 Applied Artificial Intelligence (3,1,0), Fall 2015

MEANINGS CONSTRUCTION ABOUT SAMPLING DISTRIBUTIONS IN A DYNAMIC STATISTICS ENVIRONMENT
Ernesto Sánchez, CINVESTAV-IPN, México; Santiago Inzunza, Autonomous University of Sinaloa, México; esanchez@cinvestav.mx

Elementary School Mathematics Priorities
By W. Stephen Wilson, Professor of Mathematics, Johns Hopkins University, and former Senior Advisor for Mathematics, Office of Elementary and Secondary Education, U.S.

Stabilization by Conceptual Duplication in Adaptive Resonance Theory
Louis Massey, Royal Military College of Canada, Department of Mathematics and Computer Science, PO Box 17000, Station Forces, Kingston, Ontario

CONFIOUS*: Managing the Electronic Submission and Reviewing Process of Scientific Conferences
Manos Papagelis, Dimitris Plexousakis and Panagiotis N. Nikolaou, Institute of Computer Science

Improving Knowledge-Based System Performance by Reordering Rule Sequences
Neli P. Zlatareva, Department of Computer Science, Central Connecticut State University, 1615 Stanley Street, New Britain, CT 06050

Functional Modelling in secondary schools using spreadsheets
Peter Hubwieser, Techn. Universität München, Institut für Informatik, Boltzmannstr. 3, 85748 Garching, Peter.Hubwieser@in.tum.de, http://ddi.in.tum.de

KNOWLEDGE ORGANIZATION
Gabi Reinmann, Germany, reinmann.gabi@googlemail.com. Synonyms: information organization, information classification, knowledge representation, knowledge structuring. Definition: The term

Chapter II. Controlling Cars on a Bridge
1 Introduction: The intent of this chapter is to introduce a complete example of a small system development. During this development, you will be made aware of the

[Refer Slide Time: 05:10]
Principles of Programming Languages, Prof. S. Arun Kumar, Department of Computer Science and Engineering, Indian Institute of Technology Delhi. Lecture no. 7, Lecture Title: Syntactic Classes. Welcome to lecture

Understanding and Supporting Intersubjective Meaning Making in Socio-Technical Systems: A Cognitive Psychology Perspective
Sebastian Dennerlein, Institute for Psychology, University of Graz, Universitätsplatz

Security Issues for the Semantic Web
Dr. Bhavani Thuraisingham, Program Director, Data and Applications Security, The National Science Foundation, Arlington, VA; on leave from The MITRE Corporation, Bedford

Eastern Washington University, Department of Computer Science
Questionnaire for Prospective Masters in Computer Science Students

Demonstrating WSMX: Least Cost Supply Management
Eyal Oren, Alexander Wahler, Bernhard Schreder, Aleksandar Balaban, Michal Zaremba, and Maciej Zaremba, NIWA Web Solutions, Vienna, Austria

Intelligent Log Analyzer
André Restivo <andre.restivo@portugalmail.pt>, 9th January 2003. Abstract: Server administrators often have to analyze server logs to find if something is wrong with their machines.

HELP DESK SYSTEMS Using Case-Based Reasoning
Topics covered: what is a help desk; components of help desk systems; types of help desk systems used; the need for CBR in help desk systems; GE helpdesk using ReMind

Bridging the Gap between Knowledge Management and E-Learning with Context-Aware Corporate Learning
Andreas Schmidt, FZI Research Center for Information Technologies, Karlsruhe, Germany, Andreas.Schmidt@fzi.de

Towards Semantics-Enabled Distributed Infrastructure for Knowledge Acquisition
Vasant Honavar and Doina Caragea, Artificial Intelligence Research Laboratory, Department of Computer Science, Iowa State

AN INTRODUCTION TO USING PROSIM FOR BUSINESS PROCESS SIMULATION AND ANALYSIS
Malay A. Dalal, Madhav Erraguntla, Perakath Benjamin, Knowledge Based Systems, Inc. (KBSI), College Station, TX 77840, U.S.A.

Artificial Intelligence
ICS461, Fall 2010, Lecture #12B: More Representations. Outline: logics, rules, frames. Nancy E. Reed, nreed@hawaii.edu. Representation: agents deal with knowledge (data), facts (believe

Integrating Pattern Mining in Relational Databases
Toon Calders, Bart Goethals, and Adriana Prado, University of Antwerp, Belgium, {toon.calders, bart.goethals, adriana.prado}@ua.ac.be. Abstract: Almost a

Enforcing Data Quality Rules for a Synchronized VM Log Audit Environment Using Transformation Mapping Techniques
Sean Thorpe, Indrajit Ray, and Tyrone Grandison, Faculty of Engineering and Computing

Considering Learning Styles in Learning Management Systems: Investigating the Behavior of Students in an Online Course*
Sabine Graf, Vienna University of Technology, Women's Postgraduate College for Internet