A Platform for Output Dialogic Strategies in Natural Multimodal Dialogue Systems
Meriam Horchani 1,2, Laurence Nigay 1, Franck Panaget 2
1 France Télécom R&D, Lannion Cedex, France. 2 CLIPS-IMAG, Université de Grenoble, Grenoble, France
laurence.nigay@imag.fr

ABSTRACT
The development of natural multimodal dialogue systems remains a very difficult task. The flexibility and naturalness they offer result in an increased complexity that current software tools do not address appropriately. One challenging issue we address here is the generation of cooperative responses in an appropriate multimodal form, which highlights the intertwined relation of content and presentation. We identify a key component, the dialogic strategy component, as a mediator between natural dialogue management and multimodal presentation. This component selects the semantic information content to be presented according to various presentation constraints. Constraints include inherent characteristics of modalities, the availability of a modality, and the preferences of the user. The cooperative behaviour of the system can thus be adapted, as can its multimodal behaviour. In this paper, we present the dialogic strategy component and an associated platform for quickly developing output multimodal cooperative responses in order to explore different dialogic strategies.

ACM Classification: H.5.2 [Information Interfaces and Presentation]: User Interfaces - prototyping, user-centered design; D.2.2 [Software Engineering]: Design Tools and Techniques - user interfaces.
General terms: Algorithms, Human Factors
Keywords: Multimodal outputs, dialogic strategies, user adaptation, software architecture

1. INTRODUCTION
On the one hand, the computer can be a tool [3]: the user performs actions using the system, as with WIMP and post-WIMP interfaces. On the other hand, the computer can play the role of a partner [3]: the user and the system act together.
Dialogue systems fall into this category, moving "beyond the command/control oriented interface, where the user initiates all actions, to one that is more modelled after communication, where the context of the interaction has a significant impact on what, when, and how information is communicated" [15]. We focus on the second category of systems by studying multimodal dialogue systems. These systems support two or more modalities for input and for output to match the natural communication means of human beings. For defining a modality, we adopt a system-oriented perspective and define a modality (input or output) as the coupling [d, L] of a physical device d with an interaction language L [10] 1. A physical device is an artifact of the system that acquires (input device) or delivers (output device) information. Loudspeakers and screens are examples of output devices. An interaction language defines a set of well-formed expressions (i.e., a conventional assembly of symbols) that convey meaning. The generation of a symbol, or a set of symbols, involves actions on physical devices. Pseudo-natural language and graphical animation are examples of interaction languages. We distinguish several types of multimodal systems by their use of modalities, based on the CARE (Complementarity, Assignment, Redundancy, Equivalence) properties [11]: modalities can be used in a Complementary or Redundant way, Assigned to a particular presentation task, or Equivalent for a given presentation task.
[Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. IUI 07, January 28-31, 2007, Honolulu, Hawaii, USA. Copyright 2007 ACM /07/ $5.00.]
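The modality definition and the CARE properties above can be sketched in code. This is an illustrative sketch only, not the paper's implementation; all type and field names are our own.

```java
// A modality as the coupling [d, L] of a physical device d with an
// interaction language L, plus the four CARE relations between modalities.
public class CareSketch {
    enum Care { COMPLEMENTARITY, ASSIGNMENT, REDUNDANCY, EQUIVALENCE }

    // [d, L]: device coupled with an interaction language
    record Modality(String device, String language) { }

    public static void main(String[] args) {
        Modality speech = new Modality("loudspeaker", "natural language");
        Modality text = new Modality("screen", "text/hypertext");
        // A screen list plus a spoken count convey one answer jointly:
        Care relation = Care.COMPLEMENTARITY;
        System.out.println(speech + " and " + text + " used with " + relation);
    }
}
```

A real system would attach capabilities (e.g. maximum item count) to each modality; here the pair alone illustrates the system-oriented definition.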
The computational processes dedicated to analyzing multimodal inputs differ from those required to produce multimodal outputs. We limit our study to multimodal output generation, which generally involves determining the next answer of the system (step 1), selecting the appropriate modalities and forms of multimodality (step 2) and producing the concrete outputs (step 3). Our work specifically focuses on dialogic strategies for natural multimodal dialogue.
[Footnote 1: A human-centered perspective would lead us to define an input modality based on a human action ability and an output modality based on the involved human sense.]
A natural dialogue cannot be
restricted to a dialogue in natural language: it implies at least negotiation ability, contextual interpretation, flexibility of interaction, the capability to generate cooperative responses, and adequacy of the response style [14]. For defining cooperative responses in the context of information systems, a dialogic strategy must be chosen in order to communicate to the user a maximum of pertinent information (step 1). Furthermore, the choice of a dialogic strategy must take presentation constraints into account in order to define an adequate multimodal presentation (step 2). Presentation constraints include inherent characteristics of output modalities (e.g. the maximum number of items that can be presented by a spoken message) and devices (e.g. the size of a screen), the availability of a modality, as well as choices or preferences of the user (e.g. the user utters the sentence "tell me the phone number of Kate" to express that s/he prefers spoken output). More constraints, described as metrics to assess the desirability of a presentation, are presented in [19]. As stated in [12] and [4], step 1 and step 2 must be at least partly decided in parallel for defining natural multimodal responses: this blurs the distinction between the content and its presentation. Therefore the dialogic strategy serves as a mediator between the dialogue management (step 1) and the presentation (step 2). Although we propose a dedicated component to select the appropriate dialogic strategy, few studies deal with the issue of the suitability of a dialogic strategy, and no guidelines are available today for designing one. To address this issue, we present a software platform for exploring different dialogic strategies: its high modularity enables the rapid development of multimodal cooperative responses. The structure of the paper is as follows.
First we recall the main steps of a multimodal presentation generation process in order to underline the scope of our platform and its components. Then we present our platform, focusing on the mediator role of the dialogic strategy software component. Before examining related work, we present an example that illustrates the use of the platform. This example is based on our dialogue system AMIE 2, whose main features are presented in the next section.

2. THE AMIE SYSTEM
AMIE is an internal enterprise directory prototype. The system allows enterprise staff to find information about their co-workers (e.g. last names, photos, addresses, phone numbers, office numbers, sites, teams), about the teams (e.g. fax numbers, acronyms, full names, descriptions) and about the locations of enterprise sites (e.g. towns, countries, access maps). AMIE can be used on a personal computer as well as on a mobile phone, as shown in Figure 1. Our work focuses on the interfaces on mobile phones. AMIE is a multimodal information system. For input, the user can speak (natural language speech queries), click on hypertext links and type queries and commands using the keyboard. The system can produce spoken outputs (M1=[loudspeakers, natural language]), text including hypertext links (M2=[screen, text/hypertext]) and graphics (M3=[screen, graphics]) such as maps or photos. The system also provides coordinated simultaneous outputs combining speech and text. For example, if a user asks for Kate's phone number, the system generates the following coordinated outputs: a list of entries (first name Kate, last name, phone number, link to the person's card) displayed on screen (M2) along with the spoken message "There are N Kates" (M1). Here, M1 and M2 are used in a complementary way.
[Footnote 2: AMIE is a French acronym for intelligent multimodal enterprise directory. "Amie" in French means friend.]
Figure 1: AMIE running on a mobile phone.
Focusing on output, four initial dialogic strategies are identified depending on the number of responses to be presented to the user:
Case 1 - the dialogic strategy when there is no solution: the system suggests to the user alternative solutions or, failing that, alternative search criteria. For example, the user asks for Kate Rabbot's phone number: the system informs the user that it does not know a Kate Rabbot but that it knows Kate Rabbit;
Case 2 - the dialogic strategy when there is one and only one solution: the system presents the solution and additional information. For example, the user asks for Kate Rabbit's phone number: the system informs the user of Kate Rabbit's phone number and provides additional information, such as Kate Rabbit's mobile phone number;
Case 3 - the dialogic strategy when there are some solutions: the system presents the solution list without any additional information. For example, the user asks for Kate's phone number: the system informs the user that it has several responses and presents the list of Kate's phone numbers;
Case 4 - the dialogic strategy when there are many solutions: the system suggests to the user possible criteria to restrict the solution set. For example, the user asks for Peter's phone number: the system informs the user that there are several responses and suggests that the user specify the last name or the team.
Amongst these four strategies, the key issue is the distinction between "some solutions" and "many solutions". In a
first version, the choice between the two corresponding dialogic strategies (Case 3 and Case 4) was performed independently of any presentation constraints. More precisely, this first version was implemented using the ARTIMIS technology [14]. An ARTIMIS-based system uses an inference engine to answer the user in a cooperative way. It generates communicative acts providing not only possible answers to the user's request, but also related information (e.g. additional information for an answer to the user's request, or the criteria along with some values for restricting the initial user's request if there are too many solutions). These acts comply with the FIPA-ACL standard [6]. The generated communicative acts are treated by a specific algorithm that assigns modalities to them and sends the corresponding presentation tasks (i.e. "present this information using this modality") to the suitable presentation devices. This method has two main disadvantages. On the one hand, the system's presentation is predetermined and presentation constraints cannot influence the distinction between "some solutions" and "many solutions"; we therefore want to implement this mutual influence between presentation and content. On the other hand, there are no guidelines about the suitability of dialogic strategies to presentation constraints; we therefore need to be able to easily change the choice of dialogic strategies in order to test them against presentation constraints. To achieve these aims, we propose a dialogic strategy component and an associated platform, described in this paper. In the following, we describe the dialogic strategy component and explain how our platform can help explore several dialogic strategies based on presentation constraints, by showing how rapidly several alternative strategies can be implemented and tested.
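The four count-based strategies above can be sketched as a single selection function. This is a minimal illustration under our own assumptions: the "many" threshold and the strategy names are invented here, and the first version of the system did not expose such a parameter.

```java
// Sketch of the four count-based dialogic strategies (Cases 1-4).
public class CountBasedStrategy {
    static String select(int solutions, int manyThreshold) {
        if (solutions == 0) return "SUGGEST_ALTERNATIVES";       // Case 1: no solution
        if (solutions == 1) return "PRESENT_WITH_EXTRAS";        // Case 2: unique solution
        if (solutions <= manyThreshold) return "PRESENT_LIST";   // Case 3: some solutions
        return "SUGGEST_RESTRICTION_CRITERIA";                   // Case 4: many solutions
    }

    public static void main(String[] args) {
        // With a fixed threshold, presentation constraints play no role:
        // this is exactly the limitation of the first version.
        System.out.println(select(5, 10));
        System.out.println(select(42, 10));
    }
}
```

The point of the paper is precisely that `manyThreshold` should not be a constant: it should follow from the presentation constraints in force.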
Before presenting the dialogic strategy component and the platform, we describe the main steps of a multimodal presentation generation process.

3. MULTIMODAL GENERATION PROCESS
The main steps of a multimodal presentation generation process are presented in the Reference Model (RM) for IMMPS (Intelligent MultiMedia Presentation Systems) [4]. The RM is independent of implementation issues and consists of a functional decomposition of the generation process into layers. Five layers are identified:
Control layer: this layer groups components which decide on presentation goals;
Content layer: from the Control layer results, the Content layer is responsible for goal refinement, content selection, modality allocation and information ordering;
Design layer: to provide a link between the abstract representation of the multimodal output and concrete output objects, the Design layer uses the Content layer results to define the modality objects to be presented and possibly their spatial and temporal arrangement;
Realization layer: based on the design plan specification produced by the Design layer, this layer generates concrete output objects and their spatial and temporal relationships;
Presentation display layer: this layer is responsible for making the concrete output objects perceivable by the user while maintaining the spatial/temporal relationships between the objects (coordination function).
Based on these five layers, the generation process is not sequential: each layer can constrain the previous layers. Consequently, feedback between layers is necessary. Moreover, output coordination is central to obtaining a coherent global presentation. The coordination results from the generation of referring expressions [2] as well as temporal and spatial coordination [2,7]. Referring expressions are not studied in the RM. Furthermore, depending on the level of coordination, output coordination can take place in each layer.
For example, using an object name or its corresponding pronoun could be decided during the Realization layer or during the Presentation display layer. We illustrate the five layers of the RM with an example: the aim of the presentation is to indicate Mrs. Rabbit's office on a map. The Control layer translates the goal into terms understandable by the generation process: [present <Mrs. Rabbit's office>]. From this goal, the Content layer derives information pieces associated with a particular modality, or at least with a language (a modality being defined by the coupling of a language with a device [d, L]): for example, a sentence to recall the user's request and to focus the user's attention on the map, and a deictic to indicate the response on the map. The Design layer then translates these information pieces into modality objects, selecting the devices: ["The office of Mrs. Rabbit is here", loudspeaker] and [a red flickering point and a static plan, screen]. The temporal relationship between the two modality objects is described as coincident in the framework of [17], in other words "starting at the same time". The Realization layer produces the two corresponding displayable modality objects, based on the modality objects. Finally the Presentation display layer runs the presentation resulting from the two displayable modality objects and their temporal relationships. Consequently, the user hears the spoken message "The office of Mrs. Rabbit is here" while a red point flickers on the screen. In the first version of our system, the Control layer, the goal refinement and the content selection (Content layer) are handled by the ARTIMIS technology. An ad-hoc algorithm is in charge of the modality allocation, the information ordering (Content layer) and the Design layer. The Realization and Presentation display layers are run by the appropriate devices. The multimodal generation is sequential.
Yet the RM stresses that the layers do not work independently and that backtracking is possible between layers. In particular, the concretization of the multimodal presentation could influence the goal refinement and the content selection of the Content layer. That is why it seems necessary to clearly distinguish these steps from the Control layer. This distinction does not exist in the ARTIMIS technology, and it is generally overlooked in most architectures.
Studying dialogic strategies as mediators between the content and its presentation, we mainly focus on the role of the Content layer in a software architecture and its intertwined relation with the Control layer.

4. A COMPONENT FOR SIMULTANEOUSLY SELECTING CONTENT AND PRESENTATION
To sum up, we aim at: (1) allowing a rapid development of multimodal cooperative responses in order to test the adequacy of chosen dialogic strategies and (2) underlining and implementing the influence of the presentation on the content. To satisfy our first goal, our work is based on ARCH [16], a reference software meta-model. This highly modular architecture offers the established advantages of reducing production costs and of satisfying the software engineering properties of reusability, maintainability and evolvability. As shown in Figure 2, the ARCH meta-model includes five components:
The Domain-specific component implements domain-specific concepts in a presentation-independent way;
The Domain-adaptor component maps domain objects from the Domain-specific component onto conceptual objects from the Dialogue component and vice versa;
The Dialogue component is the keystone of the model and is dedicated to the management of the dialogue. Generally, it realizes the Control layer and a part of the Content layer (i.e. goal refinement and content selection);
The Presentation component maps presentation objects from the Dialogue component onto interaction objects from the Interaction toolkit component. It thus implements a part of the Content layer (i.e. modality allocation and information ordering) and the Design layer;
The Interaction toolkit component implements the concrete interaction with the user. In other words, it corresponds to the Realization layer and the Presentation display layer.
If we compare the first version of our system to the ARCH meta-model, the ARTIMIS technology implements the Domain-specific component, the Domain-adaptor component and the Dialogue component, while the ad-hoc algorithm for goal refinement and modality allocation represents the Presentation component. Device toolkits compose the Interaction toolkit component. ARCH makes a clear distinction between the dialogue management and the output generation. But, as the RM underlines, the distinction between content and presentation is not trivial. Output personalization stresses this interrelation, particularly if it is not reduced to a surface personalization. Considering our illustrative example, the distinction between "some solutions" and "many solutions" should depend on presentation constraints, which are generally treated by the Presentation component, or even by the Interaction toolkit component. So the dialogic strategies must depend not only on the solutions found but also on the inherent characteristics of output modalities and devices, the availability of a modality, the choices or preferences of the user, etc. For example, in the first version the user cannot demand spoken output. And even if it were possible, s/he would obtain a spoken message enunciating all the solutions when asking for the phone number of Peter. This answer may lead the user to cognitive overload: the system should not present more than three pieces of information in a spoken message.
Figure 2: The dialogic strategies within ARCH.
Consequently, if we want to satisfy our second goal, i.e. that the system's answers take presentation constraints into account, it seems necessary to select, at least partly, content and presentation simultaneously. To do so, we propose a key component, the dialogic strategy component.
As shown in Figure 2, it is a mediator between the Dialogue component and the multimodal presentation generation components. This component handles content produced by the Dialogue component as well as presentation constraints conveyed by the Presentation and Interaction toolkit components. It realizes the Content layer of the RM. We illustrate this role by considering an example that focuses on the last two strategies of Section 2: Case 3 - "some solutions" and Case 4 - "many solutions". The user is looking for Kate's phone number and the system finds five responses. Experimental studies performed at France Télécom R&D have shown that reading five options is possible without cognitive overload, as opposed to a spoken message enunciating the five options, which may lead to cognitive overload (at most three options should be presented in a spoken message). As explained in Section 2, the coordinated outputs could be a list of links to persons' cards displayed on screen along with the spoken message "There are five Kates". Now let us consider that the user explicitly asks for spoken output. The dialogic strategy must be adapted: an alternative strategy may consist of asking the user to restrict the solution set. In our example, a possible restrictive criterion is the team of the person. The restricting criteria are defined by the Dialogue component along with the content of the responses. The system then asks the user, by a spoken message, to specify Kate's team. Likewise, if we now consider the case of a very small screen, displaying the list of persons' cards may be difficult. Again the dialogic strategy can be adapted. The same strategy as in the previous example can be applied using different modalities: for example, instead of a spoken message asking the user to specify Kate's team, the teams in which a person called Kate works can be displayed on screen while a spoken message enunciates the number of responses.
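The adaptation described above can be sketched as follows. The limits (three spoken items, five displayed items) come from the experimental findings quoted above; the method and all names are our own illustrative assumptions, not the component's actual code.

```java
// Sketch: presentation constraints move the "some"/"many" boundary.
public class ConstraintAwareStrategy {
    static final int MAX_SPOKEN = 3;    // items a spoken message can convey
    static final int MAX_DISPLAYED = 5; // items a screen list can convey

    static String select(int solutions, boolean spokenOnly) {
        int capacity = spokenOnly ? MAX_SPOKEN : MAX_DISPLAYED;
        // The same solution set counts as "some" or "many" depending on
        // the capacity of the modalities actually available.
        return solutions <= capacity
                ? "PRESENT_LIST"
                : "ASK_RESTRICTION_CRITERION"; // e.g. ask for Kate's team
    }

    public static void main(String[] args) {
        System.out.println(select(5, false)); // five entries fit on screen
        System.out.println(select(5, true));  // too many to enunciate
    }
}
```

With five solutions, the choice between Case 3 and Case 4 flips purely because the user asked for spoken output: this is the mutual influence of presentation on content.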
To assess the correctness of the Dialogic strategy component, it is necessary to integrate it into an ARCH-based software platform. Few studies deal with the issue of the suitability of a dialogic strategy. Therefore our platform must not only implement the Dialogic strategy component but also enable the rapid development of multimodal cooperative answers in order to explore different dialogic strategies. We describe this platform in the next section.

5. A PLATFORM FOR EXPLORING DIALOGIC STRATEGIES
Allowing a rapid development of multimodal cooperative responses in order to test the adequacy of chosen dialogic strategies is our first goal. To satisfy it, our platform is based on the ARCH meta-model. To implement the Domain-specific component, the Domain-adaptor component and the Dialogue component, we choose to reuse a large part of the ARTIMIS technology. Usually, the inference engine of the ARTIMIS technology produces a plan of communicative acts. For our needs, the inference is interrupted and we retrieve all the semantic information relevant to building the system's answer. The result is a communicative goal, corresponding to a set of potential presentation goals. For example, if the user asks for Kate's phone number, our Dialogue component produces a communicative goal composed of the list of solutions including related information (the first name, the last name, the mobile phone and the office number):

(potentialcontent
  :userrequest s0
  :solutionset (set
    (solution fname:kate name:rabbit phone: mobile: office:lb150)
    (solution ) )
  :restrictionsequence (sequence team name) )

with s0 a description of the user's request (Kate's phone number here). Consequently, our Dialogue component is limited to the Control layer. In addition, it is necessary to simplify the three last layers of the RM in order to focus on the Content layer. Hence, we choose to implement the Output components (i.e. the Presentation component and the Interaction toolkit component) with the ICARE (Interaction CARE) components [9]. ICARE allows the easy and rapid development of multimodal interfaces [5]. The ICARE framework is based on a conceptual component model that describes the manipulated software components. Elementary ICARE components include device and language components for defining a modality, while the composition ICARE components are based on the CARE properties [11]. ICARE components are assembled in order to specify the multimodal interface for a presentation task. Figure 3 shows three ICARE specifications for the presentation task "Present the number of solutions": it could be presented visually, in a multimodal way, or by a spoken message. For multimodal input interfaces, a graphical editor has been developed that enables the designer to graphically assemble the ICARE components. From this high-level specification, the code of the input multimodal interface is automatically generated. For multimodal output interfaces, the ICARE components are manually assembled since the editor has not yet been extended to include ICARE output components [9].
Figure 3: ICARE specifications for the presentation task "present the number of solutions".
The Dialogic strategy component aims at satisfying our second goal. In order to underline and implement the intertwined relation of the content to be presented with the multimodal outputs, it acts as an intermediary between the Dialogue component and the Output components. It is responsible for defining the multimodal output based on the relevant semantic information provided by the Dialogue component as well as the presentation constraints generally handled by the Output components. It manages content selection and presentation allocation at the same time. Figure 4 presents the resulting platform. It specifies the software technologies chosen to implement each component.
Figure 4: Platform for exploring dialogic strategies. The Dialogue component (a part of ARTIMIS) sends possible contents (communicative goals) to the Dialogic strategy component (rules), which returns the chosen contents (communicative acts) to the Dialogue component and sends presentation specifications (modality-allocated communicative acts = ICARE specifications) to the Output components (ICARE components).
At runtime, the multimodal generation process is as follows. Given a user's request, the Dialogue component runs and plans all possible answers and relevant additional information. These alternatives are transmitted to the Dialogic strategy component as a communicative goal. They are not modality-allocated.
According to presentation constraints, the Dialogic strategy component selects the communicative acts among the potential presentation goals of the communicative goal. Presentation constraints are implemented as external resources used by the Dialogic strategy component. On the one hand, an explicit choice by the user (e.g. the user utters the sentence "tell me Kate's phone number") can be provided to the Dialogic strategy component by the Dialogue component. On the other hand, the available modalities (e.g. the off-state of a device) and their characteristics can be provided by the ICARE components. Moreover, the user's preferences and handicaps can be defined by a user profile manager. The Dialogic strategy component specifies the chosen presentation to the ICARE output components. This specification corresponds to modality-allocated communicative acts, in other words to modality-allocated elementary semantic propositions, interpreted by the Output components as presentation tasks. In addition, the Dialogic strategy component conveys the chosen contents to the Dialogue component using communicative acts, but the selected modalities are not specified since the ARCH Dialogue component is modality-independent. This feedback is necessary in order to maintain an accurate dialogic context.
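The two messages emitted for one chosen content can be sketched as follows: the Output components receive a modality-allocated act, while the feedback to the modality-independent Dialogue component drops the modality. The types, method names, and string encoding below are purely illustrative, not the platform's actual message format.

```java
// Sketch of dispatching one chosen communicative act to two destinations.
public class ActDispatch {
    record Act(String content, String modality) {
        // Modality-allocated act, interpreted as a presentation task.
        String forOutputComponents() {
            return "(inform :content " + content + " :modality " + modality + ")";
        }
        // Same content without the modality: keeps the dialogic context
        // accurate while leaving the Dialogue component modality-independent.
        String forDialogueComponent() {
            return "(inform :content " + content + ")";
        }
    }

    public static void main(String[] args) {
        Act act = new Act("(solution fname:kate name:rabbit)", "hypertext");
        System.out.println(act.forOutputComponents());
        System.out.println(act.forDialogueComponent());
    }
}
```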
For example, if the user asks for Kate's phone number without presentation constraints, the Dialogic strategy component sends to the ICARE output components the following modality-allocated communicative act:

(inform :sender system :receiver user
  :content (info
    :object (set (solution fname:kate name:rabbit ) )
    :modality hypertext))

And it sends to the Dialogue component the following (not modality-allocated) communicative act:

(inform :sender system :receiver user
  :content (info
    :object (set (solution fname:kate name:rabbit ) )))

Even if our Dialogue component is implemented using the ARTIMIS technology, this component could also be a simpler dialogue controller, for example managing SQL database queries and responses. The richness/complexity of the dialogic strategies that can be implemented depends on the Dialogue component. If the chosen Dialogue component does not produce communicative goals, its results must be translated into a simple list of semantic propositions understandable by the Dialogic strategy component. So far in this paper we have particularly focused on explicit choices made by the user: these choices are memorized during the interpretation of the user's request. The choice of strategies is implemented as a rules engine. Rules are based on the possible contents and the user's choices, and lead to the selection of a plan of predefined ICARE specifications. We use the JSA framework [8] to implement the rules engine. This framework allows identifying particular patterns in the received communicative goal, so we can retrieve the values of meta-references prefixed by a double question mark:

Term contentp = SLPatternManip.fromTerm(
    "(potentialContent :userconstraint ??userconstraintvalue
                       :solutionset ??solutionsetvalue )");
// (the match of the received goal against this pattern, elided here,
// yields the contentmatch result)
Constant answermodeinput = (Constant) contentmatch.term("userconstraintvalue");
TermSet solutionsetinput = (TermSet) contentmatch.term("solutionsetvalue");

And we use them to select the adequate rule:

if ( answermodeinput.stringValue().equals("visual") ) {
    if ( solutionsetinput.size() > 5 ) {
        // ... rule for too many solutions to display
    } else {
        // ... rule for a displayable solution list
    }
} else {
    // ... rules for the other presentation constraints
}

The answer is composed of at least one communicative act. The content of these communicative acts is built using the analysis of the retrieved meta-references. The rules introduce a new level of multimodal composition. Indeed, ICARE specifications enable the composition of modalities for a particular presentation task. Each presentation task realizes a particular communicative act. But a cooperative system's reaction generally consists of more than one communicative act. So the composition of communicative acts is on a semantic level. For our needs, we identify the two following semantic compositions 3:
Two presentation tasks or presentation task plans are equivalent when only one of them is presented, and the choice between them depends on the considered presentation constraints to convey a cooperative system's reaction;
Two presentation tasks or presentation task plans are complementary when both are needed to convey a particular cooperative system's reaction.
Even if this semantic composition is modality-independent, it contributes to producing a multimodal presentation since the involved presentation tasks may be presented along different modalities. Some examples of these semantic compositions are presented in the following section. Today, there are no ergonomic guidelines for designing acceptable cooperative dialogic strategies. Our platform allows a rapid development of such strategies. For a given dialogue system, the Dialogue component results are translated into a communicative goal understandable by the Dialogic strategy component. Since it is easy to change the pure and combined modalities using ICARE, the dialogic strategies can be easily modified by adding/changing rules involving new ICARE specifications.
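The two semantic compositions just defined can be sketched as operations over presentation task plans. This is a schematic illustration under our own naming; the platform realizes these compositions through ICARE components, not through these functions.

```java
import java.util.List;

// Sketch of the two semantic compositions of presentation tasks.
public class SemanticComposition {
    // Equivalence: the tasks are interchangeable; exactly one is presented,
    // chosen according to the presentation constraints in force.
    static List<String> equivalent(List<String> tasks, int chosenByConstraints) {
        return List.of(tasks.get(chosenByConstraints));
    }

    // Complementarity: all tasks are needed to convey the system's reaction.
    static List<String> complementary(List<String> tasks) {
        return tasks;
    }

    public static void main(String[] args) {
        List<String> plans = List.of("spokenPlan", "graphicalPlan", "multimodalPlan");
        System.out.println(equivalent(plans, 1));
        System.out.println(complementary(plans));
    }
}
```

Note that the composition itself is modality-independent: multimodality arises because the composed tasks may each be realized through different modalities.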
To sum up, we propose an extended version of the ARCH architecture model that advocates a Dialogic strategy component in between the Dialogue component and the two Output components. In light of the RM of Section 3, the Control layer is realized by the Dialogue component, while the Content layer corresponds to the Dialogic strategy component and the three last layers to the ICARE components. (Other semantic compositions are probably possible.) Based on this architecture, we have explained the components of our platform and the information exchanged between them. We now illustrate the use of the platform by considering our information system.

6. FROM MODEL TO REALITY: INTERACTING

To illustrate the use of the platform, we consider three cases of increasing complexity for the Dialogic strategy component. In all three cases, the system has several responses to the user's request, i.e. there is more than one solution. The communicative goal is therefore the same throughout: it consists of the list of solutions, including additional information and possible criteria to restrict the solution set. The chosen dialogic strategies are only examples, and experimental studies could demonstrate their inadequacy.

First, we consider a simple case. The system presents the list of solutions to the user using combined modalities: a spoken message indicating the number of solutions, and the list of responses displayed on screen. This design solution is based on the hypothesis that the user can read more information than s/he can hear. As shown in Figure 5, this reaction includes two communicative acts answering the user's request, each of them presented along one modality. In addition, the system invites the user to focus her/his request using the keyboard in a text area on screen (the user can also refine her/his request using speech). This is the default multimodal presentation, defined without considering presentation constraints. In this case, the Dialogic strategy component simply identifies that there is more than one solution; it does not perform a choice in terms of dialogic strategies. It therefore simply receives the communicative goals and translates them into terms understandable by the ICARE components. In this application, we choose to execute the presentation tasks on loudspeakers and on screen in parallel. In our approach, the spatial/temporal relationships between the presented objects are decided at the Content layer of the RM.

Figure 5. The default rule for the case of several responses and its corresponding multimodal presentation ICARE specifications.

Second, we extend the first case by letting the user specify whether s/he wants the responses presented by spoken messages or displayed on screen. As shown in Figure 6, three presentation plans are defined: the default multimodal presentation; a monomodal graphical presentation that displays on screen the number of solutions and the list of solutions (even if the screen is too small), together with a text area inviting the user to specify a request; and a monomodal spoken message that enunciates to the user the number of solutions and alternative search criteria, again with a text area inviting the user to focus her/his request.

Figure 6. The rule specification for the case of several responses, considering the presentation choice made by the user.

Based on rules, the Dialogic strategy component selects the presentation plan. The rules take the user's choices as parameters (e.g. "visual output", "spoken output", or "no constraint", which corresponds to multi-sensorial perception) in order to select the appropriate ICARE specification. Even if the user prefers a particular presentation for outputs, it remains possible to interact with the system using speech or by direct manipulation: that is why a text area for specifying a request is always available.
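In the platform itself these rules are declarative specifications interpreted by the Dialogic strategy component; purely as an illustrative sketch of the selection logic for this second case, the mapping from the user's preference to a presentation plan can be expressed as follows. The plan identifiers and the `select_plan` helper are invented for this example and are not part of the actual platform API.

```python
# Illustrative sketch only: the plan identifiers and select_plan are
# invented; in the platform, rules are declarative specifications
# interpreted by the Dialogic strategy component.

PLANS = {
    "no constraint": "multimodal_default",  # spoken count + list on screen
    "visual output": "graphical_only",      # count and list on screen + text area
    "spoken output": "spoken_only",         # spoken count and criteria + text area
}

def select_plan(user_choice):
    """Map the user's presentation preference to a presentation plan.

    Whatever the selected plan, a text area remains available so the
    user can always specify a request by direct manipulation.
    """
    return PLANS.get(user_choice, "multimodal_default")
```

For instance, `select_plan("spoken output")` yields the monomodal spoken plan, while any unrecognized preference falls back to the default multimodal presentation.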
Figure 7. The rule specification for the case of several responses, considering the presentation choice made by the user as well as inherent modality characteristics.

The rules include three sub-specifications corresponding to the three presentation plans, linked by a semantic Equivalence component. The Dialogic strategy component sends a message to the control port of the ICARE Equivalence component to select one of the three presentation plans. Even though the three presentation plans are considered equivalent, they correspond to different strategies. Moreover, according to the selected strategy, the Dialogic strategy component does not send the same communicative acts back to the Dialogue component to inform it of the content actually presented to the user. Even in this simple example, we note that the Dialogic strategy component is able to select both content and presentation: it does not simply select output modality options.

Lastly, in addition to the choices made by the user, we also consider inherent characteristics of modalities. For example, based on experimental results obtained at France Télécom R&D, we define that a maximum of five elements can be displayed on screen, while a maximum of three elements can be presented by a spoken message, in order to be correctly perceived by the user without cognitive overload. As a consequence, in addition to the previous strategies, we consider two new ones. As shown in Figure 7, the possible dialogic strategies are as follows: If there is no user constraint, the default multimodal presentation is used as before; If the user wants information presented by spoken outputs and there are at most three solutions, the system presents the number and the list of solutions.
If there are more than three solutions, the system proposes by a spoken message that the user focus her/his request using one of the alternative search criteria: this is the same strategy as the "spoken output" user constraint of the previous case; If the user wants information visually and there are at most five solutions, the system displays on screen the number and the list of solutions: this is the same strategy as the "visual output" user constraint of the previous case. If there are more than five solutions, the system displays on screen the number of solutions and the list of alternative search criteria. Although Figure 7 seems complex, since it presents an overview of all the possibilities including the presentation tasks described as ICARE specifications, for this example the user of our platform only needs to write six rules, and the multimodal interface designers need to specify the different presentation tasks using ICARE. These examples illustrate how our platform can be used to explore several dialogic strategies that can then be tested in experimental studies. To change the strategies, rules must be changed or added in the Dialogic strategy component and the assembly of ICARE components modified. These changes can be performed rapidly, especially when they consist of adding new compositions, both at the semantic level and at the ICARE specification level. For our new version developed with the described platform, we reuse the implemented Dialogue component and retrieve the resulting communicative goals. So far we have focused on the distinction between "some solutions" and
"many solutions", and we only treated requests about a person or her/his properties. When there is more than one solution, we consider the number of solutions, the user's constraint, the inherent characteristics of the modalities, and the goal of the request (a person, including her/his first name and/or last name and her/his photo, or a property: address, phone numbers, office number, site or team). The corresponding rules define seven different presentation plans. To realize these plans, we need twenty-two different presentation tasks described as ICARE specifications. Before concluding and presenting future work, we examine related work.

7. RELATED WORK

The development of natural multimodal dialogue systems is mainly ad hoc, and few tools aim at supporting the development of such systems. In [12], a conceptual software architecture for Intelligent MultiMedia Presentation Systems (IMMPS) is presented. This architecture is also based on the ARCH reference model. The authors emphasize that "it will be necessary for [multimedia/multimodal presentation] components to be better coordinated with those for content selection because the latter can be influenced by constraints imposed by (1) the size and the complexity of presentation, (2) the quantity of information, (3) the limitation of the display hardware [...] and (4) the need of presentation completeness and coherence." Our platform therefore extends this work by addressing these identified issues. Moreover, while the described architecture is conceptual, we go one step further by providing an implementation that includes an additional software component dedicated to dialogic strategies. More recently, the WWHT (What, Which, How, Then) model [13] has been proposed as a conceptual model and software platform for the design and development of output multimodal interactive systems. Its output generation process is organized along four questions: "what information?", "which modalities?", "how?" and "then?".
On the one hand, the WWHT platform focuses on automatic adaptation and evolution, issues that we do not treat in our platform. On the other hand, the WWHT platform assumes that the content (the "what") is fixed. As a consequence, presentation constraints are handled by the "which" question, but they do not influence the system's dialogic strategy. By addressing different issues, the two platforms are complementary. In [1], an architecture for "more realistic conversational" systems, with speech as input and speech and graphics as outputs, is described. Communicative acts describing the system's reaction are defined by three collaborating components: an interpretation manager, a generation manager and a behavioural agent. The generation manager serves as a mediator between the behavioural agent, which corresponds to our Dialogue component, and the components that realize the presentation. It thus plays a role similar to that of our Dialogic strategy component. Nevertheless, the behavioural agent does not depend on presentation constraints, so the content of the system's reaction does not take presentation constraints into account: there is no feedback from the generation manager to the behavioural agent. Furthermore, the proposed architecture does not support rapid changes in the implemented dialogue for exploring design alternatives; it focuses instead on an implementation of the architecture in which some components may be reused for developing natural language systems. Finally, in [18] and [19], a framework for multimedia conversation systems called Responsive Information Architect (RIA) is presented. As shown in the overall architecture of RIA [18], content selection and media allocation are two distinct core components. Our work focuses on the links between these two core components by providing a platform for exploring the mutual impact between content and presentation.
In [19], the authors focus on the component of the framework dedicated to dynamically selecting the media that best express a given content. Their solution is based on a graph-matching approach and on metrics that correspond to constraints. In our work, we let the designers define the media (as ICARE specifications). Our platform, dedicated to exploring dialogic strategies, can be used to test constraints and consequently to identify new metrics that could be incorporated into the RIA platform, since the RIA media allocation approach is easily extensible. Furthermore, in [18], the authors focus on the other part of the RIA framework, which dynamically selects the data content to be presented. In our platform, data content is defined by the ARTIMIS technology (an inference engine), but the resulting content can be modified by our Dialogic strategy component. In RIA, content selection is based on feature-based metrics, including metrics for content quality and relevance. Two sets of metrics are directly related to our dialogic strategies: media relevance and presentation cost metrics. Our platform may help to explore the impact of such metrics on the final presentation. In our approach, we provide a way to explore metrics that are used by both the content selection and the media allocation processes of RIA. We focus on the mutual impact between content and presentation: a claim confirmed by the results of the evaluation of RIA with designers [18].

8. CONCLUSION AND FUTURE WORK

We have presented a conceptual software architecture model as well as a platform for exploring different dialogic strategies by enabling the rapid development of multimodal cooperative responses. The platform offers high modularity, distinguishing the Dialogue component from the Dialogic strategy component, and is based on the ICARE platform for the concretization of multimodal presentations.
The contribution of our platform is two-fold: (1) it allows the rapid development of multimodal cooperative responses in order to test the adequacy of chosen dialogic strategies, and (2) it underlines and implements the influence of presentation on content by identifying a dedicated component for dialogic strategies. As ongoing work, we are developing a graphical tool for the specification of the Dialogic strategy component rules
and the definition of the links with the ICARE graphical platform dedicated to presentation tasks. In a future study, we plan to collaborate with ergonomists. In order to evaluate the effectiveness of the platform for the iterative design of natural multimodal dialogue systems, we will consider other rules in the context of design on mobile phones. Several design solutions will be developed using the platform and then evaluated. This collaboration will also allow us to enrich the platform by considering other presentation constraints.

ACKNOWLEDGMENTS

This work is partly supported by the European project EMODE (EUREKA ITEA n 04046). Thanks to C. Manquillet and F. Duclaye for the first version of the system (and for the "Rabbit" example!) and thanks to A.-C. Prevost, M. Ochs and G. Serghiou for their proofreading.

REFERENCES

1. Allen, J., Ferguson, G., and Stent, A. An architecture for more realistic conversational systems. In Proc. IUI 2001.
2. André, E. The generation of multimedia presentations. In A Handbook of Natural Language Processing: Techniques and Applications for the Processing of Language as Text, Marcel Dekker.
3. Beaudouin-Lafon, M. Designing interaction, not interfaces. In Proc. AVI 2004.
4. Bordegoni, M., Faconti, G., Feiner, S., Maybury, M. T., Rist, T., Ruggieri, S., Trahanias, P., and Wilson, M. A standard reference model for intelligent multimedia presentation systems. Computer Standards and Interfaces 18, 6-7 (Dec. 1997).
5. Bouchet, J., Nigay, L., and Ganille, T. ICARE software components for rapidly developing multimodal interfaces. In Proc. ICMI 2004.
6. Foundation for Intelligent Physical Agents. Experimental specification XC00037H.
7. Foster, M. E. State of the art review: multimodal fission. Public deliverable 6.1, COMIC project.
8. Mansoux, B., Nigay, L., and Troccaz, J. Output multimodal interaction: the case of augmented surgery. In Proc. HCI 2006.
9. Nigay, L., and Coutaz, J. A generic platform for addressing the multimodal challenge.
In Proc. CHI 1995.
10. Nigay, L., and Coutaz, J. Multifeature systems: the CARE properties and their impact on software design. In Intelligence and Multimodality in Multimedia Interfaces: Research and Applications, AAAI Press.
11. Roth, S. F., and Hefley, W. E. Intelligent multimedia presentation systems: research and principles. In Intelligent Multimedia Interfaces, AAAI Press/The MIT Press.
12. Rousseau, C., Bellik, Y., Vernier, F., and Bazalgette, D. A framework for the intelligent multimodal presentation of information. Signal Processing, Special issue on Multimodal Interfaces, Elsevier.
13. Sadek, D. Design considerations on dialogue systems: from theory to technology - the case of ARTIMIS. In Proc. IDS 1999.
14. Turk, M. Multimodal human computer interaction. In Real-Time Vision for Human-Computer Interaction, Springer-Verlag.
15. The UIMS Tool Developers Workshop. A metamodel for the runtime architecture of an interactive system. SIGCHI Bulletin 24, 1 (Jan. 1992).
16. Vernier, F., and Nigay, L. A framework for the combination and characterization of output modalities. In Proc. DSV-IS 2000.
17. Zhou, M., and Aggarwal, V. An optimization-based approach to dynamic data content selection in intelligent multimedia interfaces. In Proc. UIST 2004.
18. Zhou, M., Wen, Z., and Aggarwal, V. A graph-matching approach to dynamic media allocation in intelligent multimedia interfaces. In Proc. IUI 2005.
19. Louis, V., and Martinez, T. JADE semantics framework. In Developing Multi-Agent Systems with JADE, Wiley. To appear.
More informationTo Virtualize or Not? The Importance of Physical and Virtual Components in Augmented Reality Board Games
To Virtualize or Not? The Importance of Physical and Virtual Components in Augmented Reality Board Games Jessica Ip and Jeremy Cooperstock, Centre for Intelligent Machines, McGill University, Montreal,
More informationVARIABILITY MODELING FOR CUSTOMIZABLE SAAS APPLICATIONS
VARIABILITY MODELING FOR CUSTOMIZABLE SAAS APPLICATIONS Ashraf A. Shahin 1, 2 1 College of Computer and Information Sciences, Al Imam Mohammad Ibn Saud Islamic University (IMSIU) Riyadh, Kingdom of Saudi
More informationApplying 4+1 View Architecture with UML 2. White Paper
Applying 4+1 View Architecture with UML 2 White Paper Copyright 2007 FCGSS, all rights reserved. www.fcgss.com Introduction Unified Modeling Language (UML) has been available since 1997, and UML 2 was
More informationDATA QUALITY DATA BASE QUALITY INFORMATION SYSTEM QUALITY
DATA QUALITY DATA BASE QUALITY INFORMATION SYSTEM QUALITY The content of those documents are the exclusive property of REVER. The aim of those documents is to provide information and should, in no case,
More informationelearning Instructional Design Guidelines Ministry of Labour
elearning Instructional Design Guidelines Ministry of Labour Queen s Printer for Ontario ISBN 978-1-4606-4885-8 (PDF) ISBN 978-1-4606-4884-1 (HTML) December 2014 1 Disclaimer This elearning Instructional
More information1.. This UI allows the performance of the business process, for instance, on an ecommerce system buy a book.
* ** Today s organization increasingly prompted to integrate their business processes and to automate the largest portion possible of them. A common term used to reflect the automation of these processes
More informationFrom Control Loops to Software
CNRS-VERIMAG Grenoble, France October 2006 Executive Summary Embedded systems realization of control systems by computers Computers are the major medium for realizing controllers There is a gap between
More informationIntegrating Databases, Objects and the World-Wide Web for Collaboration in Architectural Design
Integrating Databases, Objects and the World-Wide Web for Collaboration in Architectural Design Wassim Jabi, Assistant Professor Department of Architecture University at Buffalo, State University of New
More informationCS 565 Business Process & Workflow Management Systems
CS 565 Business Process & Workflow Management Systems Professor & Researcher Department of Computer Science, University of Crete & ICS-FORTH E-mail: dp@csd.uoc.gr, kritikos@ics.forth.gr Office: K.307,
More informationSOA: The missing link between Enterprise Architecture and Solution Architecture
SOA: The missing link between Enterprise Architecture and Solution Architecture Jaidip Banerjee and Sohel Aziz Enterprise Architecture (EA) is increasingly being acknowledged as the way to maximize existing
More informationGeneralizing Email Messages Digests
Generalizing Email Messages Digests Romain Vuillemot Université de Lyon, CNRS INSA-Lyon, LIRIS, UMR5205 F-69621 Villeurbanne, France romain.vuillemot@insa-lyon.fr Jean-Marc Petit Université de Lyon, CNRS
More informationRevel8or: Model Driven Capacity Planning Tool Suite
Revel8or: Model Driven Capacity Planning Tool Suite Liming Zhu 1,2, Yan Liu 1,2, Ngoc Bao Bui 1,2,Ian Gorton 3 1 Empirical Software Engineering Program, National ICT Australia Ltd. 2 School of Computer
More informationUnit 1 Learning Objectives
Fundamentals: Software Engineering Dr. Rami Bahsoon School of Computer Science The University Of Birmingham r.bahsoon@cs.bham.ac.uk www.cs.bham.ac.uk/~rzb Office 112 Y9- Computer Science Unit 1. Introduction
More informationElite: A New Component-Based Software Development Model
Elite: A New Component-Based Software Development Model Lata Nautiyal Umesh Kumar Tiwari Sushil Chandra Dimri Shivani Bahuguna Assistant Professor- Assistant Professor- Professor- Assistant Professor-
More informationService Level Agreements based on Business Process Modeling
Service Level Agreements based on Business Process Modeling Holger Schmidt Munich Network Management Team University of Munich, Dept. of CS Oettingenstr. 67, 80538 Munich, Germany Email: schmidt@informatik.uni-muenchen.de
More informationCHAPTER 7 Expected Outcomes
CHAPTER 7 SYSTEM DESIGN Expected Outcomes Able to know database design Able to understand designing form and report Able to know designing interfaces System Design A process of transforming from logical
More informationA Framework for Integrating Software Usability into Software Development Process
A Framework for Integrating Software Usability into Software Development Process Hayat Dino AFRICOM Technologies, Addis Ababa, Ethiopia hayudb@gmail.com Rahel Bekele School of Information Science, Addis
More informationSocial Team Characteristics and Architectural Decisions: a Goal-oriented Approach
Social Team Characteristics and Architectural Decisions: a Goal-oriented Approach Johannes Meißner 1 and Frederik Schulz 2 1 Research and Development, SK8DLX Services GmbH, Jena, Germany, johannes.meissner@sk8dlx.de
More informationBENEFIT OF DYNAMIC USE CASES TO EARLY DESIGN A DRIVING ASSISTANCE SYSTEM FOR PEDESTRIAN/TRUCK COLLISION AVOIDANCE
BENEFIT OF DYNAMIC USE CASES TO EARLY DESIGN A DRIVING ASSISTANCE SYSTEM FOR PEDESTRIAN/TRUCK COLLISION AVOIDANCE Hélène Tattegrain, Arnaud Bonnard, Benoit Mathern, LESCOT, INRETS France Paper Number 09-0489
More informationcontext sensitive virtual sales agents
paper context sensitive virtual sales agents K. Kamyab, F. Guerin, P. Goulev, E. Mamdani Intelligent and Interactive Systems Group, Imperial College, London k.kamyab, f.guerin, p.goulev, e.mamdani@ic.ac.uk
More informationThe Big Data methodology in computer vision systems
The Big Data methodology in computer vision systems Popov S.B. Samara State Aerospace University, Image Processing Systems Institute, Russian Academy of Sciences Abstract. I consider the advantages of
More informationDistributed Database for Environmental Data Integration
Distributed Database for Environmental Data Integration A. Amato', V. Di Lecce2, and V. Piuri 3 II Engineering Faculty of Politecnico di Bari - Italy 2 DIASS, Politecnico di Bari, Italy 3Dept Information
More informationInterface Design Rules
Interface Design Rules HCI Lecture 10 David Aspinall Informatics, University of Edinburgh 23rd October 2007 Outline Principles and Guidelines Learnability Flexibility Robustness Other Guidelines Golden
More informationMasters in Human Computer Interaction
Masters in Human Computer Interaction Programme Requirements Taught Element, and PG Diploma in Human Computer Interaction: 120 credits: IS5101 CS5001 CS5040 CS5041 CS5042 or CS5044 up to 30 credits from
More informationHow do non-expert users exploit simultaneous inputs in multimodal interaction?
How do non-expert users exploit simultaneous inputs in multimodal interaction? Knut Kvale, John Rugelbak and Ingunn Amdal 1 Telenor R&D, Norway knut.kvale@telenor.com, john.rugelbak@telenor.com, ingunn.amdal@tele.ntnu.no
More informationToward a Behavioral Decomposition for Context-awareness and Continuity of Services
Toward a Behavioral Decomposition for Context-awareness and Continuity of Services Nicolas Ferry and Stéphane Lavirotte and Jean-Yves Tigli and Gaëtan Rey and Michel Riveill Abstract Many adaptative context-aware
More information17 Collaborative Software Architecting through Knowledge Sharing
17 Collaborative Software Architecting through Knowledge Sharing Peng Liang, Anton Jansen, Paris Avgeriou Abstract: In the field of software architecture, there has been a paradigm shift from describing
More informationSoftware Engineering. What is a system?
What is a system? Software Engineering Software Processes A purposeful collection of inter-related components working together to achieve some common objective. A system may include software, mechanical,
More informationRun-time Variability Issues in Software Product Lines
Run-time Variability Issues in Software Product Lines Alexandre Bragança 1 and Ricardo J. Machado 2 1 Dep. I&D, I2S Informática Sistemas e Serviços SA, Porto, Portugal, alexandre.braganca@i2s.pt 2 Dep.
More informationContext Capture in Software Development
Context Capture in Software Development Bruno Antunes, Francisco Correia and Paulo Gomes Knowledge and Intelligent Systems Laboratory Cognitive and Media Systems Group Centre for Informatics and Systems
More informationDesign of an Interface for Technology Supported Collaborative Learning the RAFT Approach
Design of an Interface for Technology Supported Collaborative Learning the RAFT Approach Lucia Terrenghi 1, Marcus Specht 1, Moritz Stefaner 2 1 Fraunhofer FIT, Institute for Applied Information Technology,
More informationMasters in Computing and Information Technology
Masters in Computing and Information Technology Programme Requirements Taught Element, and PG Diploma in Computing and Information Technology: 120 credits: IS5101 CS5001 or CS5002 CS5003 up to 30 credits
More informationMoving from EAI to SOA An Infosys Perspective
Moving from EAI to SOA An Infosys Perspective Manas Kumar Sarkar Over years traditional Enterprise Application Integration (EAI) has provided its benefits in terms of solution re-use, application decoupling
More information