Fachberichte INFORMATIK


Knowledge Representation and Automated Reasoning for E-Learning Systems

Peter Baumgartner, Paul A. Cairns, Michael Kohlhase, Erica Melis (Eds.)

16/2003 Fachberichte INFORMATIK

Universität Koblenz-Landau, Institut für Informatik, Universitätsstr. 1, D Koblenz
WWW:
Foreword

Numerous challenges have to be addressed when building e-learning systems. This is witnessed by the enormous breadth of research in the AI in Education field, both with respect to its intersection with neighbouring scientific disciplines (cognitive science, human-computer interaction, etc.) and with respect to the tools and techniques developed. E-learning presents particular challenges, as there is the need to represent the learning goals, the learning achievements and the domain knowledge. Also, learners need appropriate support at appropriate times in their learning. All of these require student models, comparison of models and retrieval of course materials. Furthermore, with the shift to web-based platforms, open and sharable knowledge representation schemes with a well-understood semantics and adequate reasoning services will become more and more important. The workshop focuses on the application of AI themes, such as Knowledge Representation, Planning and Automated Reasoning, as they apply to e-learning environments. The purpose of the workshop is to bring together people who pursue this line of research in order to compare their approaches, stimulate further work, and so on. We have therefore invited reports on system architectures, applications and experiences of use. Indeed, as the contributions to these proceedings show, there is considerable interest and activity in this direction.

The organizers

Organizers
Peter Baumgartner, Universität Koblenz-Landau, Germany
Paul A. Cairns, University College London, UK
Michael Kohlhase, Carnegie Mellon University, USA
Erica Melis, Deutsches Forschungszentrum für Künstliche Intelligenz, Germany
Contents

Esma Aïmeur, Gilles Brassard, Sébastien Gambs: Towards a New Knowledge Elicitation Algorithm  1
Peter Baumgartner, Ulrich Furbach, Margret Gross-Hardt, Alex Sinner: Living Book: Deduction, Slicing, Interaction  8
Christoph Benzmüller, Armin Fiedler, Malte Gabsdil, Helmut Horacek, Ivana Kruijff-Korbayová, Manfred Pinkal, Jörg Siekmann, Dimitra Tsovaltzi, Bao Quoc Vo, Magdalena Wolska: Tutorial Dialogs on Mathematical Proofs  12
Armin Fiedler, Dimitra Tsovaltzi: Automating Hinting in an Intelligent Tutorial Dialog System for Mathematics  23
Alfredo Garro, Nicola Leone, Francesco Ricca: Logic Based Agents for E-learning  36
Marilza Antunes de Lemos, Leliane Nunes de Barros, Roseli de Deus Lopes: Modeling Plans and Goals in a Programming Intelligent Tutoring System  46
Permanand Mohan, Jim Greer, Gordon McCalla: Instructional Planning with Learning Objects  52
Andrew Potter: Invoking the CyberMuse: Automatic Essay Assessment in the Online Learning Environment  59
J.P. Spagnol: Modelling and Automation of Reasoning in Geometry. The ARGOS System: a Learning Companion for High-School Pupils  63
Tiffany Y. Tang, Gordon McCalla: Towards Pedagogy-Oriented Paper Recommendations and Adaptive Annotations for a Web-Based Learning System  72
Towards a New Knowledge Elicitation Algorithm

Esma Aïmeur, Gilles Brassard and Sébastien Gambs
Université de Montréal, Département IRO
C.P. 6128, Succursale Centre-Ville, Montréal (Québec), H3C 3J7 Canada

Abstract

The task of transferring knowledge from a human to a computer has always been challenging, and even more so when the knowledge comes from multiple sources. It has been the role of knowledge acquisition tools to help solve this problem. In this paper, we introduce a novel elicitation algorithm for knowledge base construction. An experiment is currently under way to evaluate how natural this algorithm feels to humans and how well it reflects their way of reasoning.

1 Introduction

This paper addresses the problem of finding algorithms that can help structure knowledge acquisition. Our main motivation originates with our need to elicit the curriculum of QUANTI [Aïmeur et al., 2001a; 2001b; 2002], an Intelligent Tutoring System currently under construction to teach Quantum Information Processing (QIP) [Chuang and Nielsen, 2000]. The challenge comes from the fact that QIP is multidisciplinary (physics, computer science, mathematics and chemistry) and that we need to elicit knowledge from multiple experts in each of these fields. Consider an expert in a particular field. The goal is to find a way to assist him in the task of expressing his knowledge explicitly. The process should be as natural and intuitive as possible for the expert. The knowledge representation considered in this paper is a semantic network, a graph in which the nodes represent pieces of knowledge and the edges between nodes symbolize relationships between them. Examples of relationships are association, composition, equivalence, exclusion, etc. Semantic networks are good candidates for applications in knowledge engineering and knowledge acquisition.
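Such a semantic network can be modelled as a typed, directed graph. The sketch below is only illustrative: the node names, node types and relation labels are invented for this example, not taken from the QUANTI curriculum.

```python
from dataclasses import dataclass, field

@dataclass
class SemanticNetwork:
    """A typed, directed graph: typed nodes and labelled edges."""
    node_types: dict = field(default_factory=dict)   # node name -> type label
    edges: list = field(default_factory=list)        # (source, relation, target)

    def add_node(self, name, node_type):
        self.node_types[name] = node_type

    def add_edge(self, source, relation, target):
        self.edges.append((source, relation, target))

    def successors(self, node):
        # Nodes reachable from `node` in one step, over any relation.
        return [t for (s, _, t) in self.edges if s == node]

# Invented example nodes and relation:
net = SemanticNetwork()
net.add_node("qubit", "concept")
net.add_node("entanglement", "concept")
net.add_edge("qubit", "association", "entanglement")
print(net.successors("qubit"))  # ['entanglement']
```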
We give in the Appendix an example of the first-level semantic network that corresponds to the concept of Quantum Teleportation [Bennett et al., 1993] in the context of Quantum Information Processing. Semantic networks are the ancestors of more elaborate representations such as the conceptual graphs developed by Sowa [Sowa, 1984; 2000]. By "as natural and intuitive as possible", we mean an algorithm for graph creation that matches most closely the way the expert organizes knowledge. We should keep in mind that an expert in a domain is rarely also a knowledge engineer; thus, having the knowledge does not necessarily imply the ability to transfer it. An elicitation algorithm close to the mental behaviour of the expert should make the task of transferring knowledge easier and more efficient, both in terms of time and accuracy. An overview of the different knowledge acquisition methods is given in Section 2. Computer scientists are often acquainted with algorithms for graph exploration, such as depth-first and breadth-first search, but not with their counterparts for the elicitation of a graph. These variants of depth-first and breadth-first search are described in Section 3. The main contribution of this paper is to propose, in Section 4, a new graph elicitation algorithm called HAKE, which is a hybrid between the depth-first and breadth-first approaches. Note once more that the task here is not to explore a graph that already exists but rather to assist the expert in creating one. For instance, an elicitation algorithm could suggest to the expert the next node from which new links could be defined. Whichever elicitation algorithm is used, however, it is important to remember that its purpose is to guide an expert, not to constrain him. Therefore, it must always be possible for experts to create, modify or delete nodes and edges at any time and in any order they wish. Given several elicitation algorithms, deciding which one suits the expert best is highly subjective.
For this reason, an experiment is currently under way in order to compare the benefits of HAKE with other elicitation algorithms based on the depth-first and breadth-first approaches. This experiment is briefly described in Section 5. Finally, we conclude with Section 6.

2 Knowledge Acquisition Methods

Knowledge acquisition is a subfield of artificial intelligence concerned with the development of methods, software and tools for building knowledge bases. In computer science, it has long been used to construct the knowledge bases of expert systems. There are three main categories of knowledge acquisition methods, namely direct elicitation techniques, machine-aided techniques and machine learning techniques. The purpose of direct elicitation techniques is to discover what knowledge the expert uses and the methods he employs for problem solving within a particular domain [Boose, 1989; Cooke, 1994]. Whereas direct elicitation techniques require interaction between the expert and
the knowledge engineer, with machine-aided techniques the expert explains his knowledge directly to the computer in a way that is semi-automatic and interactive. In particular, knowledge acquisition tools use a conceptual model to interact with the user, thus hiding the complexity and unfamiliarity of the symbolic model upon which the knowledge base is constructed [Clark et al., 2001; Gaines and Shaw, 1993; Kim and Gil, 2002; Schreiber et al., 1999; Schreiber, 2001]. The elicitation techniques considered in this paper belong to this category, and their main purpose is to assist in the creation of a knowledge base to serve as the curriculum of Intelligent Tutoring Systems. Finally, the goal of machine learning knowledge acquisition techniques is to automate a significant part of the knowledge acquisition process [De Jong et al., 1993; Gaines, 1996; Krishnan et al., 1999; Tecuci and Kodratoff, 1995; Webb et al., 1999; Weijters and Paredis, 2002; Zhou and Chen, 2002]. Aside from these three categories, software such as meta-tools has been developed to generate on demand knowledge acquisition tools adapted to a particular domain [Gennari et al., 2003].

3 Depth-First and Breadth-First Elicitation Algorithms

The depth-first and breadth-first approaches to knowledge elicitation are directly inspired by their well-known cousins used in graph searching. However, the task of interest here is not to explore an existing graph but rather to assist the expert in creating a graph. The algorithm used affects the order in which the nodes and edges are created. In general, knowledge elicitation techniques could be used to create new nodes (objects) in addition to defining edges (relations) between nodes. For the sake of simplicity, however, we assume in the formal descriptions below, as well as in the experiment under way, that the set of objects (the dictionary) has already been defined.
We use the elicitation process to assist the user in creating edges between nodes, as well as in defining the types of nodes and edges.

3.1 Depth-first approach

With the depth-first approach, the expert starts creating the graph from the root node and creates an edge between the root and a child node; then, the focus moves directly to this child node without asking the expert first to enter all the children of the root node. The elicitation process continues in the same manner: each time an edge is created between a node and its child, the elicitation continues directly with the child node. When the elicitation reaches a dead end (i.e. a leaf), the algorithm uses backtracking (implicit in the recursion) in order to go back to the parent node and continue the elicitation. The numbers on the edges of Fig. 1 illustrate the order in which an expert might use a depth-first approach to elicit a concept from ecology in which we are interested in the food chain: there is an edge from one node to another if the former is eaten by the latter. A human creating a graph in this manner can be thought of as someone who keeps following the trail of his ideas. He starts by thinking of an object A, and then he imagines an association between this object A and some other object B. Now, he has to figure out an association between this object B and yet another object. This process is repeated until he is no longer able to link the object he is currently thinking of with another object. When this point is reached, the expert goes back to the previous object, which corresponds to backtracking, and he continues the process. The advantage of this technique is that the expert can follow an idea until the end without any delay: he does not have to wait until he has thought of all the children of a node before continuing the elicitation with the next node.
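The depth-first discipline just described can be sketched as a short runnable program. This is only a sketch: the interactive "ask the expert" steps are simulated by a dict mapping each node to the successors a scripted expert would name, and the node names are invented, not the paper's actual food-chain example.

```python
# Depth-first elicitation order (cf. Section 3.1): follow each new child
# immediately, backtracking when a node has no further successors.
def depth_first_elicitation(root, expert_successors):
    visited = {root}           # the root counts as visited at the outset
    edges = []                 # edges in the order they are elicited

    def elicit(node):
        for succ in expert_successors.get(node, []):
            edges.append((node, succ))     # expert creates the edge node -> succ
            if succ not in visited:
                visited.add(succ)          # first visit: node type would be chosen here
                elicit(succ)               # focus moves directly to the child

    elicit(root)
    return edges

# Invented mini food chain: what eats what, as the scripted expert would state it.
expert = {"plant": ["insect", "deer"], "insect": ["bird"], "bird": [], "deer": []}
print(depth_first_elicitation("plant", expert))
# [('plant', 'insect'), ('insect', 'bird'), ('plant', 'deer')]
```

Note that the edge ('plant', 'deer') comes last: the whole subgraph under "insect" is elicited before the expert returns to the root.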
Also, there is no risk for the expert of getting lost because there is never a conceptual jump (see Section 3.2) between two consecutive nodes in the elicitation process: the distance in terms of the number of edges between two consecutive nodes is always 1. The main drawback of the depth-first approach is that, by the time the expert returns to the root after he has elicited all the subgraph of the first child, he may have lost track of the list of children he might have had in mind for the root at the start of the process. More formally, the depth-first approach to graph elicitation is given below in pseudocode. Here, we say that a node has been visited if the expert has considered it at least once by creating a link to it. (The root is also considered visited at the outset even if no other node points to it.)

Process depth initialization(root)
    for each word in Dictionary do
        mark word as not visited
    ask expert to choose the type of node root
    mark root as visited
    call depth elicitation(root)

Process depth elicitation(node)
    repeat
        ask expert to select from Dictionary an entry succ for node to point at
        ask expert to choose the type of edge from node to succ
        if succ has not been visited then
            ask expert to choose the type of node succ
            mark succ as visited
            call depth elicitation(succ)
    until expert does not wish to select another successor for node

3.2 Breadth-first approach

According to the breadth-first approach, which is illustrated in Fig. 2 with the same example, the expert starts from the root node and specifies all of its children by creating the corresponding edges. The root is considered to be at level 0 and its children at level 1. Next, the expert considers the nodes at level 1 one by one, and he creates the edges between them and their children; these new nodes are at level 2. The process continues in the same manner, level by level, until the entire graph has been elicited.
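The level-by-level process just described can likewise be sketched as a short runnable program. Again, the interactive "ask the expert" steps are simulated by a dict of successor lists, and the node names are invented.

```python
from collections import deque

# Breadth-first elicitation order (cf. Section 3.2): all children of a node
# are entered before the focus moves on, level by level, via a FIFO queue.
def breadth_first_elicitation(root, expert_successors):
    visited = {root}
    queue = deque([root])
    edges = []
    while queue:
        node = queue.popleft()
        for succ in expert_successors.get(node, []):
            edges.append((node, succ))       # all of node's children first
            if succ not in visited:
                visited.add(succ)
                queue.append(succ)           # succ is processed later, in level order
    return edges

# Same invented mini food chain as a scripted expert:
expert = {"plant": ["insect", "deer"], "insect": ["bird"], "bird": [], "deer": []}
print(breadth_first_elicitation("plant", expert))
# [('plant', 'insect'), ('plant', 'deer'), ('insect', 'bird')]
```

Here both children of the root are entered before any grandchild, unlike in the depth-first order.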
The main advantage of the breadth-first approach is that it allows the expert to focus on a particular concept and to give all the relations between this concept and other concepts before going on. Therefore, the expert will not have to come back to this concept later, so that he will need to concentrate on it only once.

Figure 1: Illustration of the depth-first algorithm

During the elicitation process, a conceptual jump occurs when the distance between two successively elicited nodes is large. The larger the distance between these nodes, the greater the risk that the expert might be confused or delayed. He may not understand the relationship between the two nodes, nor why they follow each other in the elicitation process. One of the drawbacks of the breadth-first approach is that if the depth of the graph is high, there will sometimes be conceptual jumps between two consecutively elicited concepts. The deeper the graph, the larger and the more numerous the conceptual jumps. It may become difficult for the expert to keep track of the grand picture.

Process breadth elicitation(root)
    for each word in Dictionary do
        mark word as not visited
    initialize queue to an empty queue
    ask expert to choose the type of node root
    mark root as visited
    add root to queue
    repeat
        let node be the first element of queue
        remove node from queue
        repeat
            ask expert to select from Dictionary an entry succ for node to point at
            if succ has not been visited then
                ask expert to choose the type of node succ
                mark succ as visited
                add succ to queue
            ask expert to choose the type of edge from node to succ
        until expert does not wish to select another successor for node
    until queue is empty

Figure 2: Illustration of the breadth-first algorithm

4 The HAKE Elicitation Algorithm

Our novel elicitation algorithm is a hybrid between the depth-first and breadth-first approaches, hence its name HAKE: Hybrid Algorithm for Knowledge Elicitation. Our purpose in designing HAKE was to capitalize on the strengths and avoid the weaknesses of the earlier approaches. As already mentioned, its purpose is not to explore a graph but to guide the expert in the creation of the graph.
It begins as with breadth-first elicitation: the expert starts from the root node and specifies all of its children by creating the corresponding edges. But then, the focus is set on the first child, as with depth-first elicitation, and the expert is asked to proceed recursively and elicit the entire subgraph accessible from that child, beginning with the elicitation of all of its children. When the process reaches a dead end because the first subgraph is complete, we backtrack to reset the focus on the second child of the root node, and we elicit the entire second subgraph before backtracking again to the next child of the root. This process is repeated until the focus has been set successively on each child of the root and the entire graph has been elicited. In addition to the earlier notion of a node having been visited, we need in the pseudocode below to say that a node has been elicited when all the relations going from this node to other nodes have been defined. Again, the food-chain example is used to illustrate HAKE in Fig. 3.

Process HAKE initialization(root)
    for each word in Dictionary do
        mark word as not visited and not elicited
    ask expert to choose the type of node root
    mark root as visited
    call HAKE elicitation(root)

Process HAKE elicitation(node)
    initialize queue to an empty queue
    (breadth-first selection of all the immediate successors of node)
    repeat
        ask expert to select from Dictionary an entry succ for node to point at
        if succ has not been visited then
            ask expert to choose the type of node succ
            mark succ as visited
        if succ has not been elicited then
            add succ to queue
        ask expert to choose the type of edge from node to succ
    until expert does not wish to select another successor for node
    mark node as elicited
    (depth-first elicitation of those successors)
    while queue is not empty do
        let succ be the first element of queue
        remove succ from queue
        if succ has not been elicited then
            recursively call HAKE elicitation(succ)

Figure 3: Illustration of HAKE

5 Evaluation of HAKE

In the preceding sections, we have explained why we believe that the classical depth-first and breadth-first approaches to knowledge elicitation are not appropriate. To remedy the perceived shortcomings, we have presented HAKE, our novel knowledge elicitation algorithm. However, it is not possible to give a mathematical proof that HAKE is better than its competitors. Whereas graph exploration algorithms can easily be compared in terms of time, space and the quality of the resulting solution, elicitation algorithms cannot be compared in such a straightforward manner. The comparison must be done empirically, by testing the algorithms directly with humans. Hence, we designed a preliminary experiment in the fall of 2002, and we are currently running a more elaborate evaluation procedure, whose purpose is to determine whether or not our intuition that HAKE would outperform more classical approaches is justified. In this section, we briefly report on both experiments. As explained in the Introduction, the point of elicitation algorithms is to guide experts in the creation of a conceptual graph, definitely not to constrain them. At any time during an elicitation process, regardless of which guiding approach is taken, it should be easy for experts to reject suggestions from the system and to continue elicitation in any order they please.
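The HAKE procedure of Section 4 can also be sketched as a short runnable program, with the interactive steps simulated by a scripted expert and invented node names. It enters all children of the current node first (breadth-first step), then recurses into each child's entire subgraph (depth-first step).

```python
from collections import deque

# HAKE elicitation order: breadth-first over a node's children, then
# depth-first recursion into each child's subgraph (cf. Section 4).
def hake_elicitation(root, expert_successors, visited=None, elicited=None):
    visited = visited if visited is not None else {root}
    elicited = elicited if elicited is not None else set()
    edges = []
    queue = deque()
    # Breadth-first step: all immediate successors of root first.
    for succ in expert_successors.get(root, []):
        edges.append((root, succ))
        visited.add(succ)
        if succ not in elicited:
            queue.append(succ)
    elicited.add(root)
    # Depth-first step: fully elicit each child's subgraph in turn.
    while queue:
        succ = queue.popleft()
        if succ not in elicited:
            edges.extend(hake_elicitation(succ, expert_successors, visited, elicited))
    return edges

# Invented mini food chain, deep enough for HAKE to differ from breadth-first:
expert = {"plant": ["insect", "deer"], "insect": ["bird"],
          "deer": ["snake"], "bird": ["hawk"], "snake": [], "hawk": []}
print(hake_elicitation("plant", expert))
# [('plant', 'insect'), ('plant', 'deer'), ('insect', 'bird'),
#  ('bird', 'hawk'), ('deer', 'snake')]
```

Note that ('bird', 'hawk') precedes ('deer', 'snake'): the entire subgraph of the first child is elicited before the focus returns to the second child, whereas pure breadth-first elicitation would proceed level by level.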
In other words, it must always be possible for an expert to decide on new nodes and links to be created or deleted, regardless of any computer logic. For instance, in the case of a breadth-first elicitation, we should allow an expert to come back to a node previously visited if a new idea comes up, even though this would not happen in a pure breadth-first approach. Nevertheless, this flexibility was not allowed during our experiments because our purpose is to compare the raw algorithms.

5.1 A preliminary experiment

The first experiment was carried out with 45 volunteers using graphical software written in Java that had been specially designed for this purpose. The evaluation consisted of three different parts: free elicitation of a graph given on paper, guided elicitation of the same graph using all three algorithms in turn, and free elicitation of another graph that was not given explicitly. A post-test concluded the experiment. The graph elicited four times in the first two parts was our now-familiar food chain graph shown in Figs. 1, 2 and 3 (without numbers on the edges, of course); it contains 11 nodes and 20 edges. This example was chosen because it is a topic about which everyone has some knowledge. The graph elicited in the third part contained 20 nodes and 21 edges. All the actions of the user were monitored and recorded in a log file. An assistant was always available during the experiment. The purpose of free elicitation was to gather data on the elicitation order that came naturally to humans. This order was then compared to the order that would have been produced according to each of the three elicitation approaches. The purpose of guided elicitation was to compare the ease with which humans could use the three algorithms, both in terms of objective criteria such as speed and number of errors, and subjective criteria such as their perception of how natural the process seems to be.
The results that we obtained in this first experiment were encouraging because HAKE outperformed the breadth-first and depth-first approaches on all our objective criteria. In all cases, the depth-first approach was left far behind and breadth-first was a close second. On the other hand, when we consider the subjective criteria provided by the users, we found that breadth-first came out on top: people generally considered this algorithm to be the most natural, the easiest to use and the one requiring the least concentration. This time, it is HAKE that came out as a close second (except for naturalness), and again depth-first was left far behind. To summarize, HAKE and breadth-first are very close according to every criterion in our study; neither appears to outclass the other in a way that would be indisputable. Unfortunately, this preliminary study suffered from several shortcomings. This prompted us to proceed to another, more thorough experiment, which is described in the next subsection. In particular, we needed to improve on the following aspects of our methodology. True elicitation processes cannot take place when users are explicitly provided with the graph to be elicited, as we did in the first two parts of our preliminary experiment. How relevant was it to test humans on what
really amounted to graph exploration using the three algorithms? It is true that users had to elicit a graph that was not given to them explicitly in the last part of our evaluation, but regrettably that graph was too close to being a tree, which again misses important aspects of real-life elicitation. Moreover, the metric used to compare the order produced by free elicitation to the one that would have been produced with guided elicitation was dubious. A more serious flaw in our preliminary experiment was that we presented the users with the three elicitation algorithms in a fixed order, which certainly biased the study. Finally, a population of 45 volunteers is not statistically sufficient, and our graphs, with only about 20 edges each, were too small to make a convincing case.

5.2 The current experiment

Encouraged by the results of the first evaluation, yet fully aware of its shortcomings, we are currently running an improved experiment. All the inadequacies listed in the last paragraph of the previous subsection have been fixed. In particular, we have already gathered data from more than one hundred volunteers who have spent about one hour each on the process: they were asked to elicit familiar concepts for which they were not supplied with an explicit graph, they were given the opportunity to elicit graphs (not trees) containing as many as 80 edges, they were asked in a random order to try out all three elicitation algorithms, and the metric that we shall use to compare the results of free and guided elicitation will be more appropriate. Obviously, it is too early to report on the results of this experiment, since it is currently under way.

6 Conclusions

We have designed a novel knowledge elicitation algorithm called HAKE and argued that it is a good compromise between the more classical depth-first and breadth-first approaches because it combines the advantages and avoids the inconveniences of each.
As with breadth-first, it encourages the expert to define at once all the relations between a node and its children, hence the expert needs to concentrate on each node only once. As with depth-first, on the other hand, the expert is given the opportunity to follow the trail of his ideas without being unduly sidetracked, in contrast with more classical approaches such as breadth-first and depth-first. A preliminary evaluation of our algorithm was sufficiently encouraging to prompt us to undertake a more thorough experiment, which is currently under way. Nevertheless, we realize that other elicitation algorithms are possible, and it is likely that the best approach is not among the three that we have considered in this paper. Certainly, we do not claim here that HAKE is the ultimate medicine! In fact, it is likely that there is no universally best choice that would benefit every expert when it comes to building a real knowledge base. If we cannot determine that one approach is best, an alternative would be to administer a pretest to each new expert in order to custom-fit the elicitation algorithm to his mental process. For this purpose, we would need to observe the expert during free elicitation on several examples. Subsequently, unsupervised learning techniques would be used on the collected data in order to construct an ad hoc algorithm that best fits the observed behaviour. This adaptive approach, which would certainly be far from trivial, falls outside the scope of this paper; it is left for further studies. A large-scale experiment in which several approaches would be used by experts to elicit a complete semantic network (for instance, the knowledge base that will be at the heart of the aforementioned Intelligent Tutoring System QUANTI [Aïmeur et al., 2001a; 2001b; 2002]) would provide a more convincing test case.

References

[Aïmeur et al., 2001a] Aïmeur, E., Blanchard, E., Brassard, G., Fusade, B. and Gambs, S., Designing a Multidisciplinary Curriculum for Quantum Information Processing, Proceedings of AIED 01: Artificial Intelligence in Education, 2001.
[Aïmeur et al., 2001b] Aïmeur, E., Blanchard, E., Brassard, G. and Gambs, S., QUANTI: A Multidisciplinary Knowledge-Based System for Quantum Information Processing, Proceedings of CALIE 01: International Conference on Computer Aided Learning in Engineering Education, 2001.
[Aïmeur et al., 2002] Aïmeur, E., Brassard, G., Dufort, H. and Gambs, S., CLARISSE: A Machine Learning Tool to Initialize Student Models, Proceedings of ITS 02: Intelligent Tutoring Systems, 2002.
[Bennett et al., 1993] Bennett, C. H., Brassard, G., Crépeau, C., Jozsa, R., Peres, A. and Wootters, W. K., Teleporting an Unknown Quantum State Via Dual Classical and Einstein-Podolsky-Rosen Channels, Physical Review Letters 70(13), 1993.
[Boose, 1989] Boose, J. H., A Survey of Knowledge Acquisition Techniques and Tools, Knowledge Acquisition 1(1), pp. 3-38, 1989.
[Chuang and Nielsen, 2000] Chuang, I. L. and Nielsen, M., Quantum Computation and Quantum Information, Cambridge University Press, 2000.
[Clark et al., 2001] Clark, P., Thompson, J., Barker, K., Porter, B., Chaudhri, V., Rodriguez, A., Thomere, J., Mishra, S., Gil, Y., Hayes, P. and Reichherzer, T., Knowledge Entry as the Graphical Assembly of Components, Proceedings of K-CAP: International Conference on Knowledge Capture, 2001.
[Cooke, 1994] Cooke, N. J., Varieties of Knowledge Elicitation Techniques, International Journal of Human-Computer Studies 41, 1994.
[De Jong et al., 1993] De Jong, K. A., Spears, W. M. and Gordon, D. F., Using Genetic Algorithms for Concept Learning, Machine Learning 13, 1993.
[Gaines, 1996] Gaines, B. R., Transforming Rules and Trees into Comprehensible Knowledge Structures, in Advances in Knowledge Discovery and Data Mining, U. M. Fayyad, G. Piatetsky-Shapiro, P. Smyth and R. Uthurusamy (editors), MIT Press, 1996.
[Gaines and Shaw, 1993] Gaines, B. R. and Shaw, M. L. G., Supporting the Creativity Cycle Through Visual Languages, Proceedings of AAAI Spring Symposium: AI and Creativity, AAAI, 1993.
[Gennari et al., 2003] Gennari, J., Musen, M., Fergerson, R., Grosso, W., Crubézy, M., Eriksson, H., Noy, N. and Tu, S., The Evolution of Protégé: An Environment for Knowledge-Based Systems Development, International Journal of Human-Computer Studies 58, 2003.
[Kim and Gil, 2002] Kim, J. and Gil, Y., Deriving Acquisition Principles from Tutoring Principles, Proceedings of ITS 02: Intelligent Tutoring Systems, 2002.
[Krishnan et al., 1999] Krishnan, R., Sivakumar, G. and Bhattacharya, P., Extracting Decision Trees From Trained Neural Networks, Pattern Recognition 32(12), 1999.
[Schreiber, 2001] Schreiber, G., CommonKADS: Engineering and Managing Knowledge, The CommonKADS Website, 2001.
[Schreiber et al., 1999] Schreiber, G., Akkermans, H., Anjewierden, A., de Hoog, R., Shadbolt, N., Van de Velde, W. and Wielinga, B., Knowledge Engineering and Management: The CommonKADS Methodology, MIT Press, 1999.
[Sowa, 1984] Sowa, J., Conceptual Structures: Information Processing in Mind and Machine, Addison-Wesley, 1984.
[Sowa, 2000] Sowa, J., Knowledge Representation: Logical, Philosophical and Computational Foundations, Brooks/Cole, 2000.
[Tecuci and Kodratoff, 1995] Tecuci, G. and Kodratoff, Y., Machine Learning and Knowledge Acquisition: Integrated Approaches, Academic Press, 1995.
[Webb et al., 1999] Webb, G., Wells, J. and Zheng, Z., An Experimental Evaluation of Integrating Machine Learning with Knowledge Acquisition, Machine Learning 35(1), pp. 5-23, 1999.
[Weijters and Paredis, 2002] Weijters, T. and Paredis, J., Genetic Rule Induction at an Intermediate Level, Knowledge-Based Systems 15(1), 2002.
[Zhou and Chen, 2002] Zhou, Z.-H. and Chen, Z.-Q., Hybrid Decision Tree, Knowledge-Based Systems 15(8), 2002.
Figure 4: Semantic network for the concept of Quantum Teleportation
Living Book: Deduction, Slicing, Interaction

Peter Baumgartner, Ulrich Furbach, Margret Gross-Hardt, Alex Sinner
University of Koblenz, Department of Computer Science
Universitätsstr. 1, Koblenz
{peter, uli, margret,

Abstract

The Living Book is a system for the management of personalized and scenario-specific teaching material. The main goal of the system is to support active, explorative and self-determined learning in lectures, tutorials and self-study. The Living Book includes a course on logic for computer scientists with uniform access to various tools such as theorem provers and an interactive tableau editor. It is routinely used within undergraduate courses at our university. This paper focuses on the use of theorem proving technology within the Living Book, viz., the knowledge management system (KMS). The KMS provides a scenario management component where teachers may describe those parts of given documents that are relevant in order to achieve a certain learning goal. The task of the KMS is to assemble new documents from a database of elementary units called slices (definitions, theorems, and so on) in a scenario-based way (like "I want to prepare for an exam and need to learn about resolution").

1 Overview

This system description is about a real-world application of automated deduction. The system that we describe, the Living Book, is a tool for the management of personalized teaching material. The main goal of the Living Book system is to support active, explorative and self-determined learning in lectures, tutorials and self-study. It includes a course on logic for computer scientists with uniform access to various tools such as theorem provers and an interactive tableau editor¹. This course is routinely used within undergraduate courses at our university. The system integrates a knowledge management system (KMS) that uses theorem proving technology as a core component.
The task of the KMS is to assemble documents from a database of elementary units called slices (definitions, theorems, and so on) in a task-oriented way (like "I want to prepare for an exam and need to learn about resolution"). We argue that such tasks can be naturally expressed through logic and that automated deduction technology can be exploited for solving them. In fact, we use first-order logic with a default negation principle, and we employ a model-computation theorem prover for the reasoning tasks in the KMS. The input of the theorem prover consists of meta data that describe the dependencies between different slices, and logic-programming style rules that describe the task-specific composition of slices. Additionally, a user model is taken into account that contains information about topics and slices that are known or unknown to a student. A model computed by the system for such input then directly specifies the document to be assembled.

Extension of the reader software. We will briefly describe the technological, non-deductive framework that Living Book is embedded in, and we will indicate the new features made possible by using deduction. This framework is the Slicing Information Technology (SIT) [5] for the management of personalized documents. Its kernel was developed within the ILF system in the German focus program "Deduction", which was carried out in the nineties. The Slicing Book technology handles documents or textbooks that are split into small semantic units, so-called slices or units, which may be, e.g., a paragraph, a definition, or a problem in the original documents. Additional meta data play an important rôle in describing, e.g., dependencies among slices, possibly between slices from different documents. Also, keywords can be assigned to slices to indicate what the contents of a slice is about.

¹ This work is sponsored by the EU IST grant TRIALSOLUTION and the BMBF grant In2Math.
The process of slicing and keyword annotation is partially automated, but usually needs some further manual work.² The Living Book is embedded in a software called the SIT Reader, which allows HTML-based access to slices stored on the server with a standard web browser (cf. the screenshot in Figure 1). To use the system, a user can mark units, like analysis/3/1/15 and analysis/3/1/16, representing, e.g., a theorem in the analysis book together with its proof. Then she can tell the system that she wants to read the marked units and gets a corresponding PDF document. If she thinks that this information is not sufficient for her understanding, she can tell the system to include all units which are prerequisites of the units selected.

² An experienced person needs about two weeks per book.
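Expanding a selection by its prerequisites is, in essence, a transitive closure over the "requires" relation between units. The following is a minimal sketch of that step; the unit names and the REQUIRES table are illustrative only, not the actual SIT data model:

```python
from collections import deque

# Hypothetical prerequisite relation between slices. In the real system this
# metadata lives in a database; here it is a simple in-memory map.
REQUIRES = {
    "analysis/3/1/16": ["analysis/3/1/15"],   # a proof requires its theorem
    "analysis/3/1/15": ["analysis/1/0/4"],    # the theorem requires a definition
    "analysis/1/0/4": [],
}

def with_prerequisites(selected):
    """Return the selected units together with all transitive prerequisites."""
    result, queue = set(selected), deque(selected)
    while queue:
        unit = queue.popleft()
        for prereq in REQUIRES.get(unit, []):
            if prereq not in result:
                result.add(prereq)
                queue.append(prereq)
    return result
```

Marking analysis/3/1/16 and asking for prerequisites would then pull in the theorem and, transitively, the definition it depends on.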
However, the Living Book goes beyond what is possible with the SIT Reader alone. For instance, a user may select a certain chapter, say chapter 3, containing everything about integrals in the analysis book. But instead of requesting all units from this chapter, the user wants the system to take into account that she already knows, e.g., unit 3.1, and she possibly wants just the material that is important to prepare for an exam. Based on the units with their meta data, the deduction system can exploit this knowledge and combine the LaTeX-based units into a new document (hopefully) fitting the needs of the user. In conclusion, we not only have the text of the books, we have an entire knowledge base about the material, which can be used by the reader in order to generate personalized documents from the given books.

Knowledge Management System. From the viewpoint of deduction, the most interesting component of Living Book is the Knowledge Management System (KMS) with its deduction system KRHYPER. Figure 1 depicts the overall system architecture containing the KMS. As mentioned there, the KMS handles meta data of various types: types of units ("Definition", "Theorem", etc.); keywords describing what the units are about ("Integral", etc.); references between units (e.g., a Theorem unit about "Integral" refers to a Definition unit); and which units are required by other units in order to make sense (e.g., a unit could say "solve exercise so-and-so under the assumption ...", and this unit would (textually) require the mentioned exercise unit). Further, a user profile stores what is known and what is unknown to the user. It may heavily influence the computation of the assembly of the final document. The user profile is built from explicit declarations given by the user about units and/or topics that are known/unknown to him. This information is completed by deduction to figure out what other units must also be known/unknown according to the initial profile. The overall system was evaluated in field studies at two German universities within undergraduate math education, and the KMS-based assembly of documents was received very positively by students.

2 The Logic Behind

On a higher, research-methodological level, the deduction technique used in the KMS is intended as a bridging-the-gap attempt. On one side, our approach builds on results from the area of logic-based knowledge representation and logic programming concerning the semantics of (disjunctive) logic programs (see [2] for an overview). On the other side, our KRHYPER system used in Living Book is built on calculi and techniques developed for classical first-order reasoning, like hyper tableaux [1] and term indexing. To formalize our application domain we found features of both mentioned areas mandatory: the logic should be first-order, it should support a default negation principle, and it should be disjunctive. We motivate these features now.

First-Order Specifications. In the field of knowledge representation, and in particular when nonmonotonic reasoning is of interest, it is common practice to identify a clause with the set of its ground instances. Reasoning mechanisms often suppose that these sets are finite, so that essentially propositional logic results. Such a restriction should not be made in our case. Consider the following clauses, which are actual program code in the KMS about user modeling.³

    unknown_unit(analysis/1/2/1).                    (1)
    known_unit(analysis/1/2/_ALL_).                  (2)
    refers(analysis/1/2/3, analysis/1/0/4).          (3)
    known_unit(BookB/UnitB) :-                       (4)
        known_unit(BookA/UnitA),
        refers(BookA/UnitA, BookB/UnitB).

The fact (1) states that the unit named analysis/1/2/1 is unknown; in fact (2), the _ALL_ symbol stands for an anonymous, universally quantified variable.
Due to the / function symbol (and probably others), the Herbrand base is infinite. Certainly it would suffice to take the set of ground instances of these facts up to a certain depth imposed by the books. However, having thus exponentially many facts, this option does not seem a viable one. The rule (4) expresses how to derive the known-status of a unit from a known-status derived so far, using a refers relation among units.

Default Negation. Consider the program code below, which is also about user modeling. The facts (1), (2) and (3) have been described above. It is the purpose of rule (5) to compute the known-status of a unit on a higher level, based on the known units and unknown units. The relation called unknown_unit_inferred, which is computed by rule (6), is the one exported by the user-model computation to the rest of the program.

³ We use Prolog notation.
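The cautious behavior of rules (5) and (6) described above can be illustrated with a small Python sketch. It evaluates the two rules over ground units only (KRHYPER itself works at the first-order level); the _ALL_ wildcard is approximated here by prefix matching, and all names are illustrative:

```python
# Explicit user declarations, mirroring facts (1) and (2); illustrative only.
KNOWN_PREFIXES = ["analysis/1/2/"]   # known_unit(analysis/1/2/_ALL_)
UNKNOWN_UNITS = {"analysis/1/2/1"}   # unknown_unit(analysis/1/2/1)

def known_unit(unit):
    # A unit matches the declared known_unit fact if it falls under the wildcard.
    return any(unit.startswith(p) for p in KNOWN_PREFIXES)

def known_unit_inferred(unit):
    # Rule (5): known, and *not* explicitly declared unknown (default negation).
    return known_unit(unit) and unit not in UNKNOWN_UNITS

def unknown_unit_inferred(unit):
    # Rule (6): the cautious default -- everything not inferred known is unknown.
    return not known_unit_inferred(unit)
```

For the apparently inconsistent unit the cautious reading wins: unknown_unit_inferred holds for analysis/1/2/1, while a sibling such as analysis/1/2/2 comes out as known, and units never mentioned at all default to unknown.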
Figure 1: System Architecture

    %% Actual user knowledge:
    unknown_unit(analysis/1/2/1).                    (1)
    known_unit(analysis/1/2/_ALL_).                  (2)
    refers(analysis/1/2/3, analysis/1/0/4).          (3)

    %% Program rules:
    known_unit_inferred(Book/Unit) :-                (5)
        known_unit(Book/Unit),
        not unknown_unit(Book/Unit).
    unknown_unit_inferred(Book/Unit) :-              (6)
        not known_unit_inferred(Book/Unit).

Now, facts (1) and (2) together seem to indicate inconsistent information, as the unit analysis/1/2/1 is both a known_unit and an unknown_unit. The rule (5), however, resolves this apparent inconsistency. The pragmatically justified intuition behind it is to be cautious in such cases: when in doubt, a unit shall belong to the unknown_unit_inferred relation. Also, if nothing has been said explicitly about whether a unit is a known_unit or an unknown_unit, it shall belong to the unknown_unit_inferred relation as well. Exactly this is achieved by using a default negation operation not, when used as written, and when equipping it with a suitable semantics.⁴

Disjunctions and Integrity Constraints. Consider the clause set shown below. It states that if there is more than one definition unit for some Keyword, then (at least) one of them must be a computed_unit, i.e., one that will be included in the generated document (the symbol ";" means "or"). Beyond having proper disjunctions in the head, it is also possible to have rules without a head, which act as integrity constraints.

⁴ Observe that with a classical interpretation of not, counterintuitive models exist. Each model computed by our system is a possible model according to [4].
    computed_unit(Book1/Unit1) ; computed_unit(Book2/Unit2) :-
        definition(Book1/Unit1, Keyword),
        definition(Book2/Unit2, Keyword),
        not equal(Book1/Unit1, Book2/Unit2).

3 KRHYPER

The logic was chosen carefully: there is a balance between expressivity and the possibility to build an efficient implementation for model computation. It is well-known that for (propositional) stratified normal programs the intended model can be computed in polynomial time, which does not hold, e.g., for stable models. For our application, we had no problems restricting ourselves to stratified programs. Of course, disjunctive programs are intractable in general, even without default negation. Now, the calculus derives from the clauses in our example, in three steps, the following hyper tableau:

    known_unit(analysis/1/2/_ALL_)
    unknown_unit(analysis/1/2/1)
    refers(analysis/1/2/3, analysis/1/0/4)
    known_unit(analysis/1/0/4)
    known_unit_inferred(analysis/1/2/_ALL_) − { known_unit_inferred(analysis/1/2/1) }
The topmost three lines stem from the clauses (1), (2) and (3), respectively. By combining clauses (2), (3) and (4), the calculus infers, as displayed in the fourth line, that the unit analysis/1/0/4 should be known. The concluding line in the derivation is obtained from clauses (1), (2) and (5). It says that the known_unit_inferred relation contains all subunits of analysis/1/2; more technically, all ground instances of analysis/1/2/_ALL_ except for the unit analysis/1/2/1. Being able to represent interpretations in that way enables us to not ground the whole program in a preprocessing step, as commonly assumed in related approaches, but to carry out inferences directly at the first-order level.

4 Related Work

Interactive and personalized e-learning systems have been discussed in the literature. In [7] an interactive electronic book (i-book) is presented. This i-book is devoted to teaching adaptive and neural systems to undergraduates in electrical engineering. The salient feature of this book is the tight integration of simulators demonstrating the various topics in adaptive systems, and the incremental use of simulation during each chapter in order to build up a certain subject successively. The i-book, though, does not cope with different learning scenarios or user profiles and offers the same documents to every student. The paper [3] discusses perspectives for electronic books and emphasizes the need for personalized and user-specific content. That article concentrates on personalized presentation of content, for instance by applying style sheets to the content that is delivered to the user. Personalization applied to the content of the material, as done in our approach, is not considered. Based on an explicit representation of the structure of the concepts in the domain of interest and a user model, [8] and [6] dynamically generate instructional courses. These approaches use planning techniques in order to determine the relevant materials on a per-user basis.
The user model in [8] describes the student's knowledge, and contains history information about previous sessions as well as personal traits and preferences. Interactivity is not integrated in the work described in [8]. In [6] an interactive and adaptive system is presented. Scenarios and user profiles are supported. Here, the user profile distinguishes between knowledge, comprehension and application in order to reflect the different status of knowledge during learning. These two approaches differ from Living Book in two main aspects: firstly, in Living Book we have chosen a deduction-based approach instead of planning techniques, and secondly, the user profile adapts according to what the users specify that they know. For instance, the user indicates those units that are already known. From this the system deduces everything that should be known, too, based on dependence relationships between knowledge units. In [8; 6], the user model is adapted based on information the system gathers from a user during a session, e.g., whether a certain exercise has been successfully solved.

5 Conclusions

The e-learning application as described here represents a non-trivial application for deduction techniques with respect to the size of the fact base. There are roughly facts per book, and currently there are 12 books in the repository. The answer times vary between one second and one minute. Although the response times are sometimes still a bit too long, we think that deduction techniques are indeed feasible (we are currently reimplementing KRHYPER, and the new implementation will be faster by at least an order of magnitude). Also, we think it becomes obvious from our approach that the techniques used may also be applied to, say, document management applications in general, like, e.g., the generation of problem-specific Unix man pages or the assembly of personalized electronic newspapers.

References

[1] Peter Baumgartner, Ulrich Furbach, and Ilkka Niemelä. Hyper Tableaux. In Proc.
JELIA 96, number 1126 in Lecture Notes in Artificial Intelligence. European Workshop on Logic in AI, Springer.

[2] Gerhard Brewka, Jürgen Dix, and Kurt Konolige. Nonmonotonic Reasoning, volume 73 of Lecture Notes. CSLI Publications.

[3] François Bry and Michael Kraus. Perspectives for electronic books in the world wide web age. The Electronic Library Journal, 20(4).

[4] Edward P.F. Chan. A Possible World Semantics for Disjunctive Databases. IEEE Trans. on Knowledge and Data Engineering, 5(2).

[5] Ingo Dahn. Slicing book technology: providing online support for textbooks. In Helmut Hoyer, editor, Proc. of the 20th World Conference on Open and Distance Learning, Düsseldorf, Germany.

[6] E. Melis, E. Andres, J. Büdenbender, A. Frischauf, G. Goguadze, P. Libbrecht, M. Pollet, and C. Ullrich. ActiveMath: A generic and adaptive web-based learning environment. Journal of Artificial Intelligence in Education, 12(4).

[7] Jose C. Principe, Neil R. Euliano, and W. Curt Lefebvre. Innovating adaptive and neural systems instruction with interactive electronic books. Proc. IEEE, 88(1):81-95.

[8] J. Vassileva. Dynamic courseware generation at the WWW. In Proc. of the 8th World Conference on AI and Education (AIED 97), Kobe, Japan.
Tutorial Dialogs on Mathematical Proofs

Christoph Benzmüller¹, Armin Fiedler¹, Malte Gabsdil², Helmut Horacek¹, Ivana Kruijff-Korbayová², Manfred Pinkal², Jörg Siekmann¹, Dimitra Tsovaltzi², Bao Quoc Vo¹, Magdalena Wolska²
¹ Fachrichtung Informatik, ² Fachrichtung Computerlinguistik
Universität des Saarlandes, Postfach , D Saarbrücken, Germany

Abstract

The representation of knowledge for a mathematical proof assistant is generally used exclusively for the purpose of proving theorems. Aiming at a broader scope, we examine the use of mathematical knowledge in a mathematical tutoring system with flexible natural language dialog. Based on an analysis of a corpus of dialogs we collected with a simulated tutoring system for teaching proofs in naive set theory, we identify several interesting problems which lead to requirements for mathematical knowledge representation. These include resolving reference between natural language expressions and mathematical formulas, determining the semantic role of mathematical formulas in context, and determining the contribution of inference steps specified by the user.

1 Introduction

In a mathematical proof assistant (MPA), knowledge representation (if any) is used for the purpose of proving theorems. State-of-the-art MPAs such as COQ, NUPRL, MIZAR, ISABELLE/HOL, PVS and ΩMEGA usually provide a combination of proof automation and facilities for user interaction, and most of them are connected to a structured mathematical knowledge base. In spite of their common purpose (proving theorems), the heterogeneity of MPAs (they are based on different logics, calculi, semantics, representations of proofs, etc.) poses a challenge for the communication of mathematical knowledge between them; most importantly, a common ontology and semantics are missing.
Some of these issues are currently investigated in the Mathematical Knowledge Management research initiative [4].* However, appropriate knowledge representation in MPAs to support the search for a proof is only one of the issues to be addressed in the future of computer-aided mathematics, and in computer-aided mathematical education in particular. Among the challenges involved in human-oriented automated proving is the coupling of MPAs with natural language processing. This in turn gives rise to additional requirements on knowledge representation. For example, it has been shown in [9] that the mathematical domain representation as used for proof search and proof planning is not sufficient for the purpose of proof presentation. Some methods for more natural references to rules have been demonstrated in [16]. In this paper, we present further requirements on mathematical knowledge representation for the purpose of handling flexible natural language dialog in a mathematical tutoring system. Our discussion is based on data we collected through experiments with a simulated tutoring dialog system for teaching proofs in naive set theory. Some state-of-the-art tutorial systems allow limited dialog, where the input is either menu-based or requires exact wording [24; 2; 13]. This contrasts with Moore's empirical findings showing that flexible natural language dialog is needed to support active learning [23]. The latter approach is taken, for example, in the CIRCSIM-Tutor project [22], which aims to build a natural-language-based tutoring system for first-year medical students to learn about the reflex control of blood pressure. The goal of our project is to develop a mathematical tutoring system with flexible natural language dialog to support mathematical problem solving.

* This work is supported by the SFB 378 at Saarland University, Saarbrücken, and the EU training network CALCULEMUS (HPRNCT ) funded in the EU 5th framework.
We employ a modular approach keeping a strict separation between the different kinds of knowledge involved in the processing. The design of the system components is informed by the analysis of a corpus of tutorial dialog data we collected in an experiment. The outline of this paper is as follows. We first
present the aims of our project, illustrate the current application scenario and motivate the choice of the mathematical domain. The modeling of static and dynamic knowledge within this domain is our first contribution. Next, we describe an experiment in which we collected a corpus of natural language tutorial dialogs in the chosen mathematical domain. On the basis of the analysis of our corpus, we then present the key requirements and challenges for the representation of mathematical knowledge and the design of a mathematical reasoning tool.

2 The DIALOG Project

The goal of the DIALOG project¹ [25] is (i) to empirically investigate the use of flexible natural language dialog in tutoring mathematics, and (ii) to develop an experimental prototype system gradually embodying the empirical findings. The experimental system will engage in a dialog in written natural language (and later also in multimodal forms of communication based on diagrams, spoken language and animated mathematical function displays) to help a student understand and construct mathematical proofs. The overall scenario for the system is illustrated in Figure 1. We describe its components below.

Learning Environment. In our scenario, the student takes an interactive course in some field of mathematics within a web-based learning environment. We use ACTIVEMATH [19; 21], a generic web-based learning system that dynamically generates interactive (mathematical) courses adapted to the student's goals, preferences, capabilities, and knowledge. It enables a student to select the material he/she wants to study and to review his/her knowledge about the subject matter. After finishing a learning unit, the student may opt for an interactive exercise session to actively apply what he/she has learned. It is primarily the interactive exercises that we aim to enrich with the possibility of flexible tutoring dialog using natural language.
The features of ACTIVEMATH include: user modeling and monitoring facilities; user-adapted content selection, sequencing, and presentation; support of active and exploratory learning by external tools; use of (mathematical) problem solving methods; and reusability of the encoded content as well as interoperability between systems. ACTIVEMATH maintains a dynamically updated student model (SM) containing information about the axioms, definitions, theorems (hence the assertions) and the proof techniques the student has studied and mastered so far.² This information will be used also by the tutoring dialog system. In addition, we also assume an idealized student model (ISM) set up by the author of the learning unit, which specifies the mathematical material a student ideally should know after studying the unit.

Mathematical Proof Assistant. The MPA is used for the problem-solving in the mathematical domain underlying the dialogs. This involves the verification (or falsification) of user-specified inference steps and checking whether the application of an inference step leads to a proof state from which a complete proof can be obtained. Mathematical tutorial dialogs thus require (i) stepwise interactive as well as (ii) automated proof construction at a human-oriented level of abstraction. Ideally, these are provided by the MPA. In addition, it should be possible to control the proof strategy used by the MPA (depending on the target of the tutorial session), and the proof(s) constructed by the MPA should only exploit the mathematical knowledge that the student possesses; that is, it should be possible to control the mathematical knowledge used in the proof(s) in accordance with the respective SM and ISM.

¹ The DIALOG project is part of the Collaborative Research Center on Resource-Adaptive Cognitive Processes (SFB 378) at Saarland University [26].
The ΩMEGA system [29], with its advanced proof presentation and proof planning facilities, provides an adequate starting point for integrating an MPA in our scenario.

Proof Manager. In the course of the interactive tutorial session, the user may explore alternative proofs, or make various attempts at constructing a valid proof, involving both valid and invalid inference steps. In addition, tutoring may require the possibility to compare the problem-solving attempts made by the user with target master proofs. The student's problem-solving attempts with respect to the proof space need to be monitored for the sake of managing the dialog flow. It is the task of the proof manager in our scenario to provide this interface and additional bookkeeping between the MPA and the dialog manager.

Dialog Manager. When the student enters a tutorial dialog session, the interaction is handled by the dialog manager. We employ the Information State (IS) Update approach to dialog management developed in the TRINDI and SIRIDUS projects [28; 27]. The IS is a centrally maintained data structure which contains a representation of the information accumulated as the dialog progresses, including (i)

² ACTIVEMATH keeps track of what material the student has studied and for how long [20]. It also lets the student skip material he is confident to know well already.
Figure 1: DIALOG project scenario.

private information of the system, and (ii) the information considered to be shared between the system and the user. A dialog is modeled as a sequence of dialog moves, each of which is a transition from one information state to the next. The system interprets each user utterance with respect to the current IS, and then computes a transition to a new IS. When it is the system's turn, the next move is selected according to the IS at that point, the corresponding utterance is produced, and again the IS is updated. The dialog manager relies on the input analysis and output generation modules to exchange data between the user and the system; it further relies on the proof manager to monitor the mathematical problem-solving and to access the MPA.

Knowledge Resources. The static knowledge in our scenario comprises linguistic resources, dialog resources, pedagogical knowledge, and mathematical knowledge. The dynamic knowledge includes the SM and ISM mentioned above, as well as the information state maintained by the dialog manager. The linguistic resources include the grammar and the lexicon used for analyzing the natural language input and generating the output. We combine the use of generic, domain-independent resources with resources specific to the particular area of mathematics being taught. The static dialog resources include (i) dialog move selection rules (i.e., rules that determine what dialog move the system will make next, given the current information state and a communicative goal), and (ii) dialog information-state update rules (i.e., rules that dynamically change the information state depending on the dialog moves the user or the system have successfully made).
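The information-state update cycle, with dialog moves as transitions between states, can be sketched as follows; the state fields and move names here are invented for illustration and do not reflect the actual TRINDI/SIRIDUS data structures:

```python
from dataclasses import dataclass, field

@dataclass
class InformationState:
    # (i) private system information, (ii) information shared with the user
    private: dict = field(default_factory=dict)
    shared: dict = field(default_factory=dict)

def update(state, move):
    """Apply one dialog move as a transition to the next information state."""
    kind, content = move
    if kind == "user_utterance":
        # User moves become shared history and trigger interpretation.
        state.shared.setdefault("history", []).append(content)
        state.private["last_input"] = content
    elif kind == "system_move":
        # System moves are selected from the current IS, then recorded.
        state.shared.setdefault("history", []).append(content)
    return state

state = InformationState()
update(state, ("user_utterance", "A and B are disjoint"))
update(state, ("system_move", "Correct. How do you proceed?"))
```

Each call realizes one transition; move selection rules would inspect `state` to choose the system's next move, and update rules correspond to the branches of `update`.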
We distinguish between domain-independent, generic dialog moves, such as meta-communication moves (used, e.g., for clarification and correction), and domain-specific ones, such as various kinds of hinting moves [11; 31], which may be further specialized for tutoring in the mathematics domain. The pedagogical knowledge specifies generic and domain-specific teaching strategies. This includes the specification of the didactic versus socratic teaching methods. Also the hinting dialog moves mentioned above are derived from the pedagogical knowledge. Finally, the static mathematical knowledge consists of assertions (i.e., axioms, lemmata, theorems), domain-dependent proof rules and methods, corresponding diagrammatic illustrations as
well as selected completed master proofs. This mathematical knowledge is typically highly structured into mathematical subdomains, and it usually forms a dependency/inheritance graph. Examples of systems maintaining structured corpora of formalized mathematics are MIZAR with its mathematical library [5], NuPrl's knowledge base [3], and the MBase system [18], which is the system of choice in our project. An essential requirement in our scenario is that the mathematical knowledge is shared between the learning environment, the DIALOG system, and the mathematical assistant. One problem in many current proof systems is to guarantee consistent handling and data flow between the declarative and the procedural view of assertions. In [32], we suggest a solution that uses declarative entries in the mathematical knowledge base to automatically generate all potential procedural views from these declarative entries for each given proof context. We already mentioned that there may be a limited number of fixed master proofs for the proof exercises to be employed in guiding the tutorial session. These can be statically maintained in the mathematical knowledge base. Generally, however, there are infinitely many variants of proofs for a mathematical theorem, and a significant number of these proofs is acceptable for tutoring, relative to the knowledge and capabilities of the student. We therefore couple the static modeling of a well-chosen set of master proofs with the dynamic verification of single inference steps and the dynamic generation of proofs by the MPA. The SM (and the ISM) refer to the mathematical knowledge base in the sense that they maintain, for each student, a view on this knowledge base, separating the known from the unknown content. An additional teacher model could provide information such as a specification of the dominant and the subdominant mathematical concepts of a learning unit.
Note that the structure imposed by the latter information is likely to differ from the hierarchical structure of the knowledge base itself [30].

Our Current Domain of Choice: Naive Set Theory. For the first phase of the project we chose naive set theory as the mathematical domain of interest. We integrated a course on naive set theory into ACTIVEMATH. Basic notions (e.g., set), definitions (e.g., subset), and set operations (e.g., union, intersection, set complement, power set) are structurally represented in this course. Typical examples are presented after each definition, for the student to get a good intuition about the more abstract concepts. Students are also exposed to Venn diagrams, which provide an intuitive understanding of set operations. Throughout the course, the student is continuously introduced to the more important properties of this domain, for example, the laws of commutativity, associativity, distributivity, or De Morgan's laws. The naive set theory domain has several advantages: (i) The problems in this domain are almost always automatically provable [6; 7]. (ii) The domain is not too complex for the intended users (i.e., first-year students). (iii) Simple problems are typically even decidable, so that wrong proof steps can be detected by the generation of counterexamples with a model generator [6]. (iv) The domain provides interesting opportunities for multimodal interaction using Venn and Spider diagrams.³ (Sound and complete inference systems exist for the representation layer of Spider diagrams; cf. [14] and the references therein.) The disadvantages of this domain are: (i) Its modeling is built directly on predicate logic, without higher-level concepts and fields of mathematics on many intermediate layers between the base logic and the domain itself. Hence, there are no hierarchical dependencies on other mathematical subdomains, such as real numbers, continuous functions, Abelian groups, etc.
(ii) Consequently, the hierarchical expansion depth of proof plans and proofs is also relatively low. Although this raised some initial doubts about the suitability of the naive set theory domain, the experiment described in the next section revealed that even such a relatively simple mathematical domain has sufficient complexity to allow meaningful tutorial dialog sessions. We shall, however, also consider more complex mathematical domains in future experiments.

3 Empirical Study

We conducted a Wizard-of-Oz (WOz) experiment in order to collect a corpus of tutorial dialogs in the naive set theory domain. We implemented a tool to support the experiment and collect the dialog data online [10]. In a WOz experiment, the subject interacts through an interface with a human wizard simulating the behavior of a system [8]. The WOz methodology is commonly used to investigate human-computer interaction in systems under development. One of the reasons for using a WOz setting rather than a human tutor is that it has been observed that humans interact differently with computers than with other humans. Another reason is that the tutor should follow the specific algorithm(s) which we are implementing in our system. In this way the dialog data we collect (i) represents the users' behavior in interactions following these algorithms and (ii) provides early feedback on the algorithms. In subsequent experiments

³ These aspects are, however, not subject of this paper and will be considered in later experiments.
in the project, implemented components can substitute for some of the tasks now carried out by the wizard, while preserving the overall experimental setup.

[Figure 2 (three panels: Declarative View, Procedural View, Diagrammatic View): Declarative, procedural, and diagrammatic knowledge in the domain of naive set theory.]

We invited 24 subjects to participate in the experiment. They were students with an educational background in the humanities (e.g., law, economics, various languages, psychology) or the sciences (e.g., biology, chemistry, computer science, computational linguistics). Their prior mathematical knowledge ranged from little to fair. For each subject, the experiment consisted of the following phases (each of which had a fixed maximum duration):

(1) Preparation and pretest: First, the subject filled in a background questionnaire. Then he/she studied written lesson material explaining basic concepts and providing a collection of six lemmata about properties of sets and eleven lemmata about properties of powersets.4 Finally, he/she was asked to prove (on paper) the theorem K(A) ∈ P(K(A ∩ B)).

Footnote 4: In the first experiment the lesson material was still presented on paper, not through the ACTIVEMATH system.

(2) Tutoring session: The subject was asked to evaluate a tutoring system with natural language dialog capabilities. He/she was given three theorems to prove: The theorem K((A ∪ B) ∩ (C ∪ D)) = (K(A) ∩ K(B)) ∪ (K(C) ∩ K(D)) was used first to let the subject familiarize himself/herself with the system's interface. Then two more complex theorems were presented (in different order to different subjects):
(a) A ∩ B ∈ P((A ∪ C) ∩ (B ∪ C))
(b) Wenn A ⊆ K(B), dann B ⊆ K(A). (If A ⊆ K(B), then B ⊆ K(A).)
The interface enabled the subject to type text or insert mathematical symbols by clicking on buttons; it also displayed the complete dialog with both the tutor's and the subject's utterances. The
subject was instructed to enter partial steps of a proof rather than the complete proof as a whole, in order to enable a dialog with the system.

(3) Posttest and evaluation questionnaire: The subject was asked to write down (on paper) a proof for one more theorem.5 To conclude the experiment, he/she was asked to fill in a questionnaire addressing various aspects of the system and its usability.

The tutor-wizard's task was to respond to the student's utterances following a given algorithm. The wizard first classified the completeness, accuracy, and relevance of the subject's utterance with respect to a valid proof of the theorem at hand. Then, the wizard decided which dialog moves to make next and verbalized them. Depending on the tutoring strategy employed by the wizard for a given subject, the dialog move options included informing the subject about the completeness, accuracy, and relevance of the utterance, giving hints on how to proceed further, explaining a step under consideration, prompting for the next step, or entering into a clarification dialog. The wizard was free to mix text with formulas [11].

4 A Preliminary Analysis of the Test Dialogs

In this section, we examine the issues involved in the natural language analysis of dialog utterances containing mathematical expressions, and the role of mathematical domain knowledge. Examples of dialog utterances that illustrate the phenomena addressed by the analysis below are shown in Figure 3 (the original German versions of ut

Footnote 5: The comparison of the student's performance on the pretest and posttest proofs serves to evaluate the learning gain from the tutoring session.
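Section 2 observes that simple problems in this domain are decidable, so that wrong proof steps can be detected by generating counterexamples with a model generator [6]. The following Python fragment is a minimal sketch of that idea, not the model generator actually used in the system: it brute-forces every assignment of subsets of a small finite universe to the set variables, reading K as complement with respect to the universe (the reading under which all of the experiment's statements are theorems) and X ∈ P(Y) as X ⊆ Y. All names in the sketch are illustrative.

```python
from itertools import product

def all_subsets(universe):
    """All subsets of a finite universe, as frozensets."""
    elems = sorted(universe)
    return [frozenset(e for e, keep in zip(elems, bits) if keep)
            for bits in product([False, True], repeat=len(elems))]

def find_counterexample(statement, n_vars, universe_size=3):
    """Naive 'model generation': try every assignment of subsets of a
    small universe to the n_vars set variables; return a falsifying
    assignment if one exists, otherwise None."""
    U = frozenset(range(universe_size))
    K = lambda X: U - X                  # complement w.r.t. the universe
    for assignment in product(all_subsets(U), repeat=n_vars):
        if not statement(K, *assignment):
            return assignment
    return None

# Pretest theorem K(A) in P(K(A n B)), i.e. K(A) is a subset of K(A n B):
pretest = lambda K, A, B: K(A) <= K(A & B)
# Exercise (a): A n B in P((A u C) n (B u C)):
ex_a = lambda K, A, B, C: (A & B) <= ((A | C) & (B | C))
# Exercise (b): if A is a subset of K(B), then B is a subset of K(A):
ex_b = lambda K, A, B: (not (A <= K(B))) or (B <= K(A))

# No counterexamples exist for the actual theorems:
assert find_counterexample(pretest, 2) is None
assert find_counterexample(ex_a, 3) is None
assert find_counterexample(ex_b, 2) is None

# A wrong step, K(A) subset of K(A u B), is refuted by a counterexample:
wrong = lambda K, A, B: K(A) <= K(A | B)
print(find_counterexample(wrong, 2))
```

For the experiment's statements the exhaustive search finds no falsifying model, while a subtly wrong step such as K(A) ⊆ K(A ∪ B) is refuted by a concrete assignment of sets, exactly the kind of evidence a tutor can turn into a hint.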