Eclipse-IT 2010: The Fifth Workshop of the Italian Eclipse Community. September 30th and October 1st, 2010, Savona, Italy. Proceedings





Savona University Campus, University of Genova
Via A. Magliotto, Savona, Italy

Abstract. The Eclipse Integrated Development Environment is a widely used platform for the development of object-oriented applications. Moreover, Eclipse is an open source community and ecosystem whose projects focus on building an open development platform comprised of extensible frameworks, tools, and runtimes for building, deploying, and managing software across its lifecycle. The Eclipse-IT 2010 workshop is the fifth yearly meeting of the Eclipse Italian Community, which includes both universities and industry, with the aim of bringing together researchers and practitioners, students and professionals, around the common interest in experimenting with, extending, and supporting the Eclipse platform. Owing to a special interest and a running project, this year the conference hosts a special session on the Jazz cooperative development environment, based on the Enforcing Team Cooperation (ETC) Project. ETC involves many universities with the aim of bringing together teams of students to develop software in a cooperative way. Building on the success of the previous editions, Eclipse-IT 2010 hosts four tracks (regular, student, industrial, and Jazz applications), and three types of contributions are accepted: technical papers, student demo papers, and experience reports from public institutions (P.A.) or industrial demos.

Organized by:
DIST - Department of Computer, Communication, and Systems Science, University of Genoa
IBM
Eclipse Italian Community

Sponsored by:
IBM Eclipse Community
S.P.E.S. S.c.p.a.
Seen Solution
Shinystat s.r.l.


Published by: Eclipse Italian Community
Editor: Mauro Coccoli
ISBN:

Workshop coordinator: Mauro Coccoli, Università di Genova
Workshop Chair: Giovanni Adorni, Università di Genova
Student track Chair: Gianni Vercelli, Università di Genova
Industry track Chair: Domenico Squillace, IBM
Workshop Co-Chairs: Mauro Coccoli, Università di Genova, and Paolo Maresca, Università Federico II Napoli

Program committee (composed of both academics and industry experts):
Giovanni Adorni (Università di Genova)
Marco Aimar (Opera21 S.p.A.)
Marco Brambilla (Politecnico di Milano)
Andrea Calvagna (Università di Catania)
Fabio Calefato (Università di Bari)
Mauro Coccoli (Università di Genova)
Andrea De Lucia (Università di Salerno)
Giacomo Franco (IBM)
Rosario Gangemi (IBM)
Angelo Gargantini (Università di Bergamo)
Ferdinando Gorga (IBM)
Filippo Lanubile (Università di Bari)
Paolo Maresca (Università Federico II Napoli)
Enrico Oliva (S.E.I.C. s.r.l.)
Patrizio Pelliccione (Università di L'Aquila)
Elvinia Riccobene (Università di Milano)
Patrizia Scandurra (Università di Bergamo)
Giuseppe Scanniello (Università di Basilicata)
Vittorio Scarano (Università di Salerno)
Carmine Seraponte (Opera21 S.p.A.)
Domenico Squillace (IBM)
Lidia Stanganelli (Università di Genova)
Rodolfo Totaro (Energeya Italia s.r.l.)
Gianni Vercelli (Università di Genova)

Table of Contents

Regular Papers
- Collaborative Mind Maps (Furio Belgiorno, Delfina Malandrino, Ilaria Manno, Giuseppina Palmieri, Donato Pirozzi and Vittorio Scarano)
- An Eclipse-based IDE for Featherweight Java implemented in Xtext (Lorenzo Bettini)
- Using Domain Specific Languages for platform-based software development: the case of Android (Antonio Natali and Ambra Molesini)
- Design and Development of an Extensible Test Generation Tool based on the Eclipse Rich Client Platform (Angelo Gargantini and Gordon Fraser)
- A probabilistic approach for choosing the best licence in the Eclipse community (Pierpaolo Di Bitonto, Paolo Maresca, Veronica Rossano, Teresa Roselli, Maria Laterza and Lidia Stanganelli)

Eclipse Jazz
- Adding collaboration into Rational Team Concert (Furio Belgiorno, Ilaria Manno, Giuseppina Palmieri and Vittorio Scarano)
- Apprendimento collaborativo fra team universitari in progetti didattici mediante l'uso di RTC [Collaborative learning among university teams in educational projects through the use of RTC] (Paolo Maresca and Lidia Stanganelli)
- Enforcing Team Cooperation Using Rational Software Tools into Software Engineering Academic Projects (Mauro Coccoli, Paolo Maresca and Lidia Stanganelli)
- On the downscaling of the Jazz platform: Experimenting the Jazz RTC platform in a teaching course (Angelo Gargantini, Guido Salvaneschi and Patrizia Scandurra)
- Enhancing team cooperation through building innovative teaching resources: the ETC_DOC project (Paolo Maresca, Giuseppe Marco Scarfogliero and Lidia Stanganelli)

- Enforcing Team Cooperation using Rational software tools: merging universities and IBM effort together (Ferdinando Gorga, Paolo Maresca, Carla Milani and Giorgio Galli)

Student Papers
- An Eclipse Plug-in for Code Search using Full-text Information Retrieval Engine (Andrejs Jermakovics and Francesco Di Cerbo)
- Collaborative GeoGebra (Emidio Bianco, Ilaria Manno and Donato Pirozzi)
- Enforcing Team Cooperation project: student opinion (Lidia Stanganelli and Diego Brondo)

Industrial Demos
- Eclipse Equinox: the adoption of the OSGi standard in Enterprise solutions (Antonietta Miele)
- FishStatJ, an Eclipse-RCP based application for statistical data mining and analysis (Fabrizio Sibeni and Francesco Calderini)
- Rich Client Application framework based on the Eclipse platform (Paolo Giardiello)
- Un sistema di monitoraggio ambientale realizzato con Eclipse [An environmental monitoring system built with Eclipse] (Alessandro Burastero, Paolo Campanella, Fabio Pintus, Cosimo Versace and Adriano Fedi)
- Eclipse RCP come piattaforma di integrazione [Eclipse RCP as an integration platform] (Vincenzo Caselli and Francesco Guidieri)
- Configuration, Change & Release Management for internal and external development with Rational Team Concert (Alessandro Moro and Michele Pegoraro)
- Processo di sviluppo in SECSERVIZI: la metodologia JMC [The development process in SECSERVIZI: the JMC methodology] (Francesco Ancona)


Regular Papers
Chair: Giovanni Adorni, Dipartimento di Informatica, Sistemistica e Telematica, Università di Genova

Collaborative Mind Maps

Furio Belgiorno, Delfina Malandrino, Ilaria Manno, Giuseppina Palmieri, Donato Pirozzi, and Vittorio Scarano
ISISLab, Dipartimento di Informatica ed Applicazioni "R.M. Capocelli", Università di Salerno, Fisciano (SA), Italy

Abstract. In the field of Computer Supported Collaborative Work, growing interest is being directed towards introducing collaboration features into existing single-user applications. The key properties that make an existing single-user application amenable to becoming collaborative are its extensibility and composability: these properties make it possible to extend the application's functionality (to add collaborative features) and to assemble or compose the application with other existing software components (for example, frameworks that provide communication and collaboration facilities). Given these considerations, we believe that the Eclipse architecture can provide strong support for introducing collaboration into a single-user application. To this aim, we present CollabXMind, a collaborative mind-mapping application, built by adding collaboration functionality to XMind, a single-user Eclipse-based application designed to create mind maps.

1 Introduction

Computer Supported Collaborative Work is a research area that studies the software (architectures, methods, and techniques) that allows and facilitates collaboration among users. In this context, one interesting research direction is how to inject collaboration into existing single-user applications, beyond the functionality provided by the application itself. One of the key advantages of this technique is user-friendliness: users already familiar with the single-user version of the software only have to learn the (few) additional collaborative features to be able to embed the new collaboration software into their work practice.
In fact, implementing collaborative versions of existing single-user applications from scratch is not always feasible: users are often reluctant to abandon their preferred, well-established applications. On the other hand, collaborative features should be integrated into the application as much as possible, to avoid the overhead of managing the collaboration on top of the standard user activities. Implementing collaborative systems from scratch also raises another issue: for each system, the designer must face similar problems related to common collaborative functionality. This aspect is presented in [1], where the authors propose a classification framework of users' needs in collaborative software. They classify collaboration needs into basic needs, enhanced needs, and comfort needs for collaboration. Furthermore, they observe that most systems (re)implement collaborative functionality by ensuring the

basic needs, and the implementation of the functionality that ensures the enhanced needs tends to re-implement the basic functionality. In fact, for most systems, basic features were created first, followed by enhanced and eventually comfort features. As a consequence, a multitude of systems provide basic features in response to the lower collaboration needs, few face enhanced or comfort collaboration needs, and, finally, there is scarce re-use of existing systems that provide just the basic functionality. The achievement of the comfort functionality can be helped by designing collaboration services that leverage existing collaboration functionality and applications, so that design and implementation can focus on new aspects and problems. The idea, therefore, is to introduce collaboration functionality into existing single-user applications by using existing frameworks that are able to support and enhance the development of such functionality. Several examples exist in the literature of making a single-user application collaborative. Specifically, an example of using a toolkit to introduce collaboration into an existing application is presented in [2], where the authors present DistEdit, a toolkit able to convert existing single-user editors (namely, MicroEmacs and GNU Emacs) into collaborative ones. In this case, changes to the source code of the original application are required. Another approach is presented as Shared Window Systems (also known as Collaboration Transparency Systems) in [3]; in these systems the collaboration features are introduced at the operating-system level and are made available to any single-user application. Some examples of this kind of approach are SharedX [4], Microsoft Meeting Space [5], and SunForum [6].
This approach does not require changes to the source code of single-user applications, but it is highly dependent on the underlying operating system, so it is not suitable for multi-platform applications. The Component Replacement approach, used in Flexible JAMM [7], shifts the focus from the operating-system level to the application level, introducing collaboration features by replacing selected components of the application's user interface with collaborative ones. This approach requires access to the source code, even though no changes are made to it. The Transparent Adaptation approach, used in CoWord and CoPowerPoint [8, 9], uses the APIs of the existing applications to introduce collaboration features. This approach does not require changes to the source code, but application APIs are needed to adequately intercept the input actions from users. The Component Mapping approach presented in [10] maps some user-interface components to collaborative ones. This kind of approach requires changes to the source code of the original applications. All these approaches present advantages and drawbacks, and each one seems to answer specific requirements (availability of the source code, availability of application APIs, dependence on the OS, etc.). In general terms, however, the idea is to introduce collaboration functionality into existing single-user applications, possibly re-using existing collaboration frameworks. Given this basic idea, the key points are the possibility to extend a single-user application with new functionality (the collaborative features) and the possibility to compose the existing application with other software components (the frameworks supporting the collaboration). These considerations make the Eclipse environment particularly suitable for modifying and combining existing (Eclipse-based) applications and frameworks to offer new functionality.

The aim of this paper is to illustrate how the component-based architecture of the Eclipse Platform and the wide set of plug-ins available in the Eclipse ecosystem can support and enhance the process of making an (Eclipse-based) single-user application collaborative. In particular, we will describe how to introduce collaboration functionality into XMind, an Eclipse-based single-user application supporting the creation of mind maps. The rest of the paper is organized as follows. In Section 2 we introduce collaborative mind maps and the architecture and features of XMind. In Section 3 we describe the main functionality of CollabXMind, the collaborative version of XMind, whose architecture and components are described in Section 4. Section 5 concludes with some final remarks.

2 Collaborative mind maps

A mind map [11] is a graph where each node represents an idea or a concept, and each link represents a semantic connection between them. Each idea can be enriched with images and hyperlinks to Web pages or other resources. Mind maps are useful for activities involving the organization of ideas, such as note-making, note-taking, and brainstorming. Currently, several software systems support the creation of mind maps and the reorganization of ideas through drag-and-drop of the nodes. A fairly exhaustive list of this kind of application can be found on the Wikipedia Web site [12], while an experiment presented by Shih et al. in [13] analyzes the impact of collaborative mind mapping on idea generation, confirming the advantage of using a collaborative mind-mapping system. The work presented in this paper aims to design and develop a collaborative real-time mind-mapping application, named CollabXMind, that enables multiple users to cooperate in parallel on a shared mind map.
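As a rough illustration of the structure just described, a mind map can be modeled as a tree of topics. The class below is a hypothetical sketch: the names Topic, addChild, and the owner field are invented for this example and are not XMind's actual object model (the owner will matter for the permission policies discussed in Section 3).

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical model of a mind-map node (NOT XMind's real API): each
// topic has a text label, an owner (the user who created it) and a
// list of child topics, forming a tree rooted at the central topic.
public class Topic {
    private final String text;
    private final String owner;
    private final List<Topic> children = new ArrayList<>();

    public Topic(String text, String owner) {
        this.text = text;
        this.owner = owner;
    }

    // Attach a new child topic and return it, so calls can be chained.
    public Topic addChild(String childText, String childOwner) {
        Topic child = new Topic(childText, childOwner);
        children.add(child);
        return child;
    }

    public String getText() { return text; }
    public String getOwner() { return owner; }
    public List<Topic> getChildren() { return children; }

    // Number of topics in the subtree rooted at this node.
    public int size() {
        int n = 1;
        for (Topic c : children) {
            n += c.size();
        }
        return n;
    }
}
```

Recording the owner on every node is what later makes per-contribution access control possible without any global locking of the map.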
CollabXMind builds on XMind [14], a single-user Rich Client Application [15] for mind mapping, and on CAFE, part of the CoFFEE suite [16], which provides functionality to support communication and collaboration facilities. XMind is a single-user standalone mind-mapping software system that enables users to create their own mind maps. XMind is based on the metaphor of a workbook that contains multiple sheets. The user can create a mind map on a sheet: around the central topic grows a graph representing related ideas and concepts. The appearance of each item can be customized with icons, colors, and so on. One of the most interesting features is the opportunity to change the structure of the interconnected ideas: the structure can represent a map, a tree, a logic chart, a fishbone, or a spreadsheet. XMind, an open source project designed as an Eclipse-based application, was named the Best Commercial Eclipse Rich Client Platform (RCP) Application in 2008 by the Eclipse community, and the Best Project for Academia in 2009 by the SourceForge community. As we will describe later, the Eclipse-based architecture of XMind has allowed us to design CollabXMind as a Rich Client Application that supports multiple users in collaboratively creating mind maps. CAFE, an acronym for Collaborative Application Framework and the core of the CoFFEE suite, has been designed to develop advanced collaboration functionality and to allow

an efficient integration with other CAFE-based tools (i.e., CoFFEE tools, or any other collaborative tool that uses CAFE as a communication framework). Both XMind and CAFE build on Eclipse, so they inherit the composability of the Eclipse architecture. This has allowed us to combine them into a collaborative version of XMind, without changes to the source code of the original application (XMind) and with advanced features provided through the use of CAFE. A further remarkable result is that CollabXMind can be integrated with other collaborative tools based on CAFE. This provides a wide set of collaborative tools which can be used together to support different collaboration tasks.

3 CollabXMind functionalities

CollabXMind is a Rich Client Application which enables multiple users (in the same place at the same time) to share the same mind map. Each user (Participant) can contribute in real time to the shared map, while only the Coordinator can create the map and manage the policies and the interaction modes (described in detail later). CollabXMind's user interface is similar to that of the XMind mind-mapping application and has some new components to support collaboration features. The user interface of the Participants (shown in Fig. 2) offers the same functionality as XMind, but each action is shown in real time to every Participant. The actions related to the creation, opening, and import of collaborative workbooks are not available to the Participant, since they are managed by the Coordinator. Further collaboration components added to the user interfaces are the Chat tool and the Presence tool, for discussion support and team awareness, respectively. In Fig. 1 we show the user interface of the Coordinator, as it appears in CollabXMind.
The classic XMind menu and tool bars are shown at the top, while in the middle there is a collaborative workbook with an example of a mind map created with contributions from all connected users (the topic of that map is the penetration of social media into our daily life). As shown in Fig. 1, the user-interface components introduced in the Coordinator's application to support the collaboration are the Control Panel (on the left) and the Chat tool (in the top-right corner). The Control Panel provides a list of all connected Participants and allows managing the Floor control (the blocking/unblocking of the user interface of a specific user, a group of users, or all Participants). The Chat tool supports discussion between users, for instance to allow meta-communication about the organization of the activities on the map. Other widgets are introduced near the zoom control bar (at the bottom) to manage anonymity and the interaction modes. The interaction modes allow the Coordinator to define when and how users can work on the artifacts. They are related to the concept of the author of a contribution: in CollabXMind each node has an owner (its author), and this information is used to apply different policies. The interaction modes provided by CollabXMind are clone, view, owner editing, and open editing. The Coordinator can change the interaction modes of Participants at runtime during the collaborative activities.

Fig. 1. User interface of the Coordinator in the CollabXMind application.

Fig. 2. User interface of the Participant in the CollabXMind application.

In the clone interaction mode, each Participant has exactly the same view (with the same map) as the Coordinator, but every action is disabled, including navigation activities such as zooming and scrolling. Only the Coordinator can perform actions

on the maps, and these are reflected in real time in the Participants' views (including navigation activities). In the view interaction mode, Participants cannot perform actions on the map, but they navigate freely: they can zoom or scroll the map on their own. The Coordinator has full control over changes to the map, and his/her actions are shown in real time to all Participants. In the owner editing mode, Participants can add their own contributions (i.e., nodes and links) to the map, allowing synchronous collaboration among users. However, Participants are not allowed to change other people's contributions: each one can contribute and possibly modify his or her own contribution, but cannot delete or change (that is, perform actions such as editing, or changing the style, shape, or color of) the contributions of other users. This guarantees a minimum level of control over the users' interactions. As shown in Fig. 2, in the owner editing mode, as a visual cue, all nodes owned by the Participant are marked on their interface with an icon representing a person with a pencil, to distinguish them from the other nodes, which cannot be edited. Finally, there is of course an open editing mode, in which everybody is free to modify every aspect and all content of the map.

4 CollabXMind architecture

In this section we describe the CollabXMind application, first introducing an overview of the architecture and then describing each component and the functionality it provides. As anticipated in Section 2, the CollabXMind application has been designed with the aim of making XMind, an open source single-user Rich Client Application, collaborative. The goal is to add collaboration functionality that allows multiple users to work concurrently on the same mind map. A strict requirement is the need to avoid any modification of the source code of the original application, so as to ensure easier updates to successive versions of the XMind software.
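The interaction modes of Section 3, which the server enforces before applying a Participant's edit, amount to an owner-based permission policy. The following is a minimal sketch with invented names (InteractionPolicy, canEdit); it does not come from the CollabXMind code base.

```java
// Hypothetical sketch of the four CollabXMind interaction modes as a
// server-side permission check (all names invented for this example).
public class InteractionPolicy {
    public enum Mode { CLONE, VIEW, OWNER_EDITING, OPEN_EDITING }

    private Mode mode;

    public InteractionPolicy(Mode mode) { this.mode = mode; }

    // The Coordinator can switch the mode at runtime.
    public void setMode(Mode mode) { this.mode = mode; }

    // Decide whether 'user' may edit a node owned by 'nodeOwner'.
    public boolean canEdit(String user, boolean isCoordinator, String nodeOwner) {
        if (isCoordinator) {
            return true; // the Coordinator always has full access
        }
        switch (mode) {
            case CLONE:
            case VIEW:
                return false; // Participants may only watch
            case OWNER_EDITING:
                return user.equals(nodeOwner); // only one's own contributions
            case OPEN_EDITING:
                return true; // everybody may edit everything
            default:
                return false;
        }
    }
}
```

Centralizing the check in one place is what allows the Coordinator to switch modes at runtime without touching any client-side state.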
The multiple instances of the CollabXMind application are arranged in a client-server architecture, where the input of each client is passed to the server, which forwards it to all other connected clients. The basic idea is to introduce collaboration functionality by intercepting the users' input in XMind (their actions on the mind map) and then passing this input to the framework providing communication functionality. Fig. 3 shows the architecture of CollabXMind: there is a Client Component (the CollabXMind Client in Fig. 3) and a Server Component (the CollabXMind Server in Fig. 3); we will refer to them as the Client Component and the Server Component, respectively, in the rest of this section. Both components, implemented as Rich Client Applications, build on XMind, on their own CollabXMind plug-in, and on the CAFE framework. These components provide the following functionality:

- the CAFE framework provides communication functionality and advanced collaboration features; it is present both in the Server Component and in the Client Component;
- the CollabXMind plug-in is responsible for intercepting the users' input and passing it to the CAFE framework and vice versa; it is present both in the Server Component and in the Client Component, whereas on the server side it is also responsible for applying policies and access control;
- the CAFE tool plug-ins are responsible for connecting the CollabXMind plug-in with the CAFE framework; they are present both in the Server Component and in the Client Component.

Fig. 3. The CollabXMind architecture. From top to bottom, both on the Server Component and on the Client Component: XMind, the CollabXMind plug-in (with a Policy controller on the Server Component), the CAFE tool plug-ins (server plug-in and client plug-in, respectively), and the CAFE framework.

4.1 The Collaboration Framework (CAFE)

The CAFE framework is the core of CoFFEE [16-18], a suite of Rich Client Applications designed to support collaborative learning in a face-to-face context. The architecture of CoFFEE is characterized by a server-side component and a client-side component, the CoFFEE Controller and the CoFFEE Discusser, respectively. It provides a set of collaborative tools integrated in the environment to support a wide range of aims and contexts; a detailed description of CoFFEE and its tools is provided in [16, 18]. CoFFEE has been available on SourceForge [17] since July 2008. Technically, CAFE is an Eclipse plug-in that provides communication functionality by exploiting ECF (Eclipse Communication Framework) [19] as its communication framework. Specifically, CAFE uses two ECF components, named Container and Shared Object: the Container provides access to the communication protocol (e.g., TCP/IP) and hosts the Shared Objects, which provide APIs to send/receive arbitrary messages via the Container. Finally, each CAFE-based tool registers its specific Shared Object with the Container.
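The Container/Shared Object pattern can be illustrated with a toy model: shared objects register with a container, and a message from one of them is relayed to all the others, much as the server relays a client's input. This is only a sketch of the pattern; ECF's real interfaces (e.g., IContainer, ISharedObject) are considerably richer.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of ECF's Container / Shared Object collaboration pattern
// (this is NOT the ECF API): shared objects register with a container
// and exchange arbitrary messages through it.
public class Container {
    public interface SharedObject {
        void handleMessage(String fromId, Object message);
    }

    private final Map<String, SharedObject> objects = new LinkedHashMap<>();

    public void register(String id, SharedObject sharedObject) {
        objects.put(id, sharedObject);
    }

    // Deliver a message from one shared object to all the others.
    public void broadcast(String fromId, Object message) {
        for (Map.Entry<String, SharedObject> entry : objects.entrySet()) {
            if (!entry.getKey().equals(fromId)) {
                entry.getValue().handleMessage(fromId, message);
            }
        }
    }
}
```

Because every tool talks to the container rather than to other tools directly, the transport (TCP/IP in ECF's case) stays hidden behind the registration point.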

Beyond the communication functionality, the CAFE framework implements a set of services to support and enhance the development of collaborative applications. It provides an automatic Server Discovery mechanism that allows clients to discover the server on the local area network; this functionality simplifies the initial phase of connecting clients to the server. An Authentication functionality is also provided by the framework, which supports the initial registration of users (i.e., the Participants) and allows choosing whether the connection of users should be free or authenticated against a list of known users. Currently, the authentication does not implement any sophisticated security mechanism, just a check against a list of names. CAFE also offers a Floor Control mechanism that allows the Coordinator to selectively block/unblock users: through the Control Panel, the Coordinator can block/unblock a single user, a group of users, or the whole team. The Latecomers management provided by CAFE supports the late joining of clients and the synchronization of their state; details of this mechanism are presented in [16]. We must emphasize that the most important feature provided by CAFE is the Tools integration mechanism: it defines an extension point (a mechanism inherited from the Eclipse architecture) to integrate any CAFE-based tool into the architecture. This integration mechanism allows available tools to be detected at startup and, therefore, launched at runtime.

4.2 The CAFE tools plug-ins

The CAFE tool (server and client) plug-ins allow communication between the CollabXMind plug-in and the CAFE framework. CAFE-based tools are integrated into the CAFE framework through the standard Eclipse extension-point mechanism: they extend the extension point defined by CAFE and implement the classes and interfaces required to launch the tool and to use the communication functionality.
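The extension-point-style tool integration can be mimicked, purely for illustration, by a registry in which each tool contributes a factory and the framework lists and launches the available tools at runtime. Real Eclipse plug-ins declare extensions in plugin.xml and are discovered through the platform's extension registry; all names below are invented for this sketch.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Supplier;

// Toy stand-in for the Eclipse extension-point mechanism that CAFE
// uses for tool integration (hypothetical names, not the Eclipse API):
// each tool registers a factory, and the framework can enumerate the
// available tools at startup and launch them on demand.
public class ToolRegistry {
    public interface Tool {
        String name();
    }

    private final Map<String, Supplier<Tool>> factories = new LinkedHashMap<>();

    // A tool "extends the extension point" by contributing its factory.
    public void contribute(String id, Supplier<Tool> factory) {
        factories.put(id, factory);
    }

    // Tools detected at startup...
    public List<String> availableTools() {
        return new ArrayList<>(factories.keySet());
    }

    // ...can then be launched at runtime.
    public Tool launch(String id) {
        Supplier<Tool> factory = factories.get(id);
        if (factory == null) {
            throw new IllegalArgumentException("unknown tool: " + id);
        }
        return factory.get();
    }
}
```

The key property mirrored here is inversion of control: the framework never needs to know the concrete tools at compile time, which is why any CAFE-based tool (Chat, Presence, or CollabXMind itself) can be added without changing the framework.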
The Java classes that allow the integration of any tool include: the class that manages the startup of the tool, the class that implements the Shared Object, and the class that implements the UI of the tool. A basic implementation of all these classes is provided by CAFE, so CAFE-based tools only have to provide their specific implementation details.

4.3 The CollabXMind plug-in

The main goal of the CollabXMind plug-in is to introduce collaboration functionality into XMind. The CollabXMind plug-in is responsible for intercepting the actions of users and passing them to the CAFE framework (and vice versa). The general idea of our approach is to intervene in the implementation of the Model-View-Controller (MVC) pattern provided by XMind and add new components to intercept users' requests and manipulate them; our components send the intercepted requests to the CollabXMind plug-in on the Server Component, which first applies policies and access control mechanisms (through the Policy controller shown in Fig. 3), then processes the requests and, finally, sends the generated events to all connected clients.

Fig. 4. How the CollabXMind plug-in components intervene in the implementation of the MVC pattern.

XMind implements the Model-View-Controller pattern through GEF (Graphical Editing Framework) [20], an Eclipse plug-in supporting the creation of graphical editors. The components defined by GEF to implement the MVC pattern are the Model, the View, and the EditPart (i.e., the Controller). Our approach adds some components to the GEF MVC implementation to intercept the users' actions on the map and pass them to our communication framework. Details of this implementation have been described in [21]. Fig. 4 depicts the details of the architecture built on the GEF MVC implementation, by illustrating how the input of a client is processed. The CollabXMind plug-in on the Client Component replaces the original EditDomain of XMind (a component that dispatches requests to the right EditPart) with a new component named CollabEditDomain. This replacement does not require changes to the source code of XMind, because it can be done through the APIs of GEF. The CollabEditDomain intercepts the requests going from the View to the EditPart (preventing the requests from being processed by the local EditPart) and sends them to the EditPart of the CollabXMind plug-in (server side) through the communication framework. In the CollabXMind plug-in on the Server Component, all client requests are enqueued and then processed by the Policy controller, which checks the user's permissions and forwards the request to the XMind EditPart; this performs the required operations on the Model, which notifies the registered listeners (changes are notified to all clients). In the CollabXMind plug-in on the Client Component, a handler receives these changes and updates the corresponding Model.
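The interception step can be sketched abstractly as follows; the class names echo GEF's EditDomain, but this is a hypothetical model, not GEF or CollabXMind code. A LocalEditDomain applies requests directly, while the collaborative replacement forwards them to a server-side callback, where the policy checks run before the shared model changes.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch of the interception mechanism; the names only
// echo GEF (EditDomain) and are not GEF or CollabXMind code.
public class CollabEditDomainSketch {

    public interface EditDomain {
        void dispatch(String request);
    }

    // Single-user behaviour: the request reaches the local model directly.
    public static class LocalEditDomain implements EditDomain {
        public final List<String> model = new ArrayList<>();
        public void dispatch(String request) {
            model.add(request);
        }
    }

    // Collaborative replacement: the request is intercepted and sent to
    // the server (where policies are applied and the resulting change is
    // broadcast to every client) instead of being processed locally.
    public static class CollabEditDomain implements EditDomain {
        private final Consumer<String> sendToServer;
        public CollabEditDomain(Consumer<String> sendToServer) {
            this.sendToServer = sendToServer;
        }
        public void dispatch(String request) {
            sendToServer.accept(request);
        }
    }
}
```

Because both domains satisfy the same dispatch contract, swapping one for the other leaves the View and EditPart untouched, which is the essence of doing the replacement purely through the existing APIs.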
The Model notifies its listeners of the change, including the EditPart, which updates the View. The processing of a request from the user acting as Coordinator is slightly different: the input is processed normally by the XMind EditDomain and then by the XMind EditPart;

when the EditPart updates the Model, it notifies the listeners, including the CollabXMind Model Listener, which sends the event to all clients.

5 Conclusions

The work of extending a single-user application to make it collaborative is not new; in fact, several examples exist in the literature of building multi-user applications starting from their single-user versions. There are several reasons to evaluate the possibility of introducing collaboration into an existing application instead of developing new collaborative applications from scratch. In many cases, the key point is that the collaborative application should be, from the point of view of the functionality provided, exactly the same as the single-user application, to preserve the realm of users loyal to that application, while at the same time providing additional collaboration features. From the developer's point of view, it usually means a lot of additional work to re-implement the existing application, preserving its behavior and just adding the collaboration features. From the user's point of view, the user has to substitute a new application for the preferred, well-established one in order to add collaboration. This is not simple, and is often costly; therefore, most users will prefer to continue using their preferred (single-user) application, in combination with simple tools for collaboration and document sharing, like instant messaging systems and e-mail. Then, to meet the needs and wishes of both developers and users, several studies consider the solution of extending single-user applications without changing their original behavior, simply adding new collaborative functionality. As briefly reported in Section 1, so far several studies have proposed different approaches, highlighting several issues depending on the approach and on the target application. A brief review of existing approaches is presented in [22].
The key factors that enable this effective extension are extensibility and composability, which are critical to the success and popularity of the Eclipse Platform. By leveraging the Eclipse Platform and its main features, we have provided an example of the construction of a multi-user application, starting from a single-user version, without applying any change to the original source code and simply adding groupware features to make it collaborative. Our result is collaborative software based on a fully-fledged application, XMind, a well-known program whose effectiveness and innovation are witnessed by multiple awards, both within the (quite large) Eclipse community and from the wider (and more heterogeneous) SourceForge audience. We have presented the CollabXMind application, a Rich Client Application that allows multiple users to create mind maps as they would with XMind, but in a collaborative way, by exploiting an underlying collaboration framework. It must be emphasized that this was possible thanks to the extensibility and the composability of the underlying architecture. The component-based nature of the Eclipse architecture, in fact, allowed us to arrange several components: XMind, the CollabXMind plug-in, and the CAFE framework. We have used XMind as a plug-in (instead of as a Rich Client Application), and then we have combined it with CollabXMind, a new plug-in which implements the components that introduce collaboration features into the original application. The integration of XMind and CollabXMind with CAFE was also possible thanks to the nature of the Eclipse architecture and to its standard extension-point mechanism. A remarkable consideration is that CollabXMind, besides being a Rich Client Application, is also a CAFE-based tool, simply thanks to its integration into the CAFE framework. This has an important two-way implication: CollabXMind can be used as a collaborative tool in the CoFFEE suite, but at the same time, CollabXMind as a Rich Client Application can use any other CAFE-based tool. This allows CollabXMind to effectively use the CoFFEE tools to support collaboration features like team awareness (with the Presence tool), simple communication (with the Chat tool), structured communication (with the Threaded Discussion tool), voting (with the Positionometer), and so on. This high level of composability and integration derives from the architecture of CAFE and its implementation on top of the Eclipse Platform. The final valuable result is that any CAFE-based collaborative environment, and thus both CollabXMind and CoFFEE, has a wide set of collaborative features inherited from the framework (such as server discovery, the communication framework, latecomer management, etc.) and of collaborative tools (all the CAFE-based tools, from a simple Presence tool to a more complex CollabXMind application). Finally, we are planning to make the CollabXMind application available on SourceForge as a new project in a short time (likely by next autumn) and to conduct further studies to evaluate the usability and effectiveness of the application.

References

1. Sarma, A., van der Hoek, A., Cheng, L.: A Need-Based Collaboration Classification Framework. In: Proc. of Eclipse as Vehicle for CSCW Research, Workshop at CSCW (2004)
2. Knister, M.J., Prakash, A.: DistEdit: a distributed toolkit for supporting multiple group editors.
In: CSCW '90: Proceedings of the 1990 ACM Conference on Computer-Supported Cooperative Work, New York, NY, USA, ACM (1990)
3. Lauwers, J.C., Joseph, T.A., Lantz, K.A., Romanow, A.L.: Replicated architectures for shared window systems: a critique. SIGOIS Bulletin 11(2-3) (1990)
4. Garfinkel, D., Welti, B., Yip, T.: HP SharedX: a tool for real-time collaboration. HP Journal 45(2) (1994)
5. Microsoft: Windows Meeting Space. windows-vista/features/meeting-space.aspx
6. SunForum: SunForum 3.0 Software User's Guide. doc/
7. Begole, J., Rosson, M.B., Shaffer, C.A.: Flexible collaboration transparency: supporting worker independence in replicated application-sharing systems. ACM Trans. Comput.-Hum. Interact. 6(2) (1999)
8. Xia, S., Sun, D., Sun, C., Chen, D., Shen, H.: Leveraging single-user applications for multi-user collaboration: the CoWord approach. In: CSCW '04: Proceedings of the 2004 ACM Conference on Computer Supported Cooperative Work, New York, NY, USA, ACM (2004)
9. Sun, C., Xia, S., Sun, D., Chen, D., Shen, H., Cai, W.: Transparent adaptation of single-user applications for multi-user real-time collaboration. ACM Trans. Comput.-Hum. Interact. 13(4) (2006)

10. Pichiliani, M., Hirata, C.: A guide to map application components to support multi-user real-time collaboration. In: International Conference on Collaborative Computing: Networking, Applications and Worksharing (2006)
11. Buzan, T., Buzan, B.: The Mind Map Book: How to Use Radiant Thinking to Maximize Your Brain's Untapped Potential. E. P. Dutton (1994)
12. Wikipedia: Mind-mapping software. (Accessed July 2010)
13. Shih, P.C., Nguyen, D.H., Hirano, S.H., Redmiles, D.F., Hayes, G.R.: GroupMind: supporting idea generation through a collaborative mind-mapping tool. In: GROUP '09: Proceedings of the ACM 2009 International Conference on Supporting Group Work, New York, NY, USA, ACM (2009)
14. XMind: Mind mapping. (Accessed July 2010)
15. Eclipse: Rich Client Platform (RCP).
16. De Chiara, R., Di Matteo, A., Manno, I., Scarano, V.: CoFFEE: Cooperative Face2Face Educational Environment. In: Proceedings of the 3rd International Conference on Collaborative Computing: Networking, Applications and Worksharing (CollaborateCom 2007), November 12-15, 2007, New York, USA (2007)
17. CoFFEE: CoFFEE at SourceForge (2010)
18. De Chiara, R., Manno, I., Scarano, V.: CoFFEE: an Expandable and Rich Platform for Computer-Mediated, Face-to-Face Argumentation in Classroom. In: Educational Technologies for Teaching Argumentation Skills. Bentham eBooks (in press)
19. ECF: Eclipse Communication Framework (2010)
20. Eclipse: Graphical Editing Framework (GEF). (Accessed July 2010)
21. Belgiorno, F., Malandrino, D., Manno, I., Palmieri, G., Pirozzi, D., Scarano, V.: Introducing Collaboration in Single-user Applications through the Centralized Control Architecture. Submitted for publication, June 2010
22. Pichiliani, M.C., Hirata, C.M.: A technical comparison of the existing approaches to support collaboration in non-collaborative applications. In: International Symposium on Collaborative Technologies and Systems (2009)

An Eclipse-based IDE for Featherweight Java implemented in Xtext

Lorenzo Bettini

Dipartimento di Informatica, Università di Torino, Torino, Italy

Abstract. Developing a compiler and an IDE for a language is usually time consuming; even when relying on a framework like Eclipse, which already provides typical IDE artifacts, implementing the parser and the model for the AST, and connecting all the language features to the IDE components, still requires a lot of manual programming. In this paper we present our implementation of Featherweight Java (a lightweight version of Java which is typically used when formalizing Java-like languages) in Eclipse, relying on Xtext (a framework for the development of programming languages in Eclipse). Xtext eases the task of implementing a compiler and an IDE based on Eclipse by providing a high-level framework that generates most of the typical and recurrent artifacts necessary for a fully-fledged IDE on top of Eclipse, while still allowing the programmer to customize every aspect.

1 Introduction

When designing and developing new languages in the context of programming language theory research, the implementation stage usually comes only after the complete formalization of the language and the proof of its type safety (especially in a statically typed scenario). Indeed, the implementation stage, even if only in prototypical form, is always time consuming, even for toy languages: writing the parser using a parser generator (like, e.g., Flex & Bison [20] or ANTLR [21]), building the abstract syntax tree (AST), performing all the visits of the AST (e.g., type checking), and finally generating code in a target language or building an interpreter. However, it would be useful to develop the language implementation while studying the theory of the language, so that one can soon test the language features and feed the experience of using the language back into the design stage (apart from possibly finding bugs in the theory while writing test programs).
Thus, having a tool for language implementation that makes the development cycle of adding a new feature to a language really rapid is crucial in this context. Finally, having an integrated development environment (IDE) for the language, while not a fundamental feature of a language implementation, is something that programmers are nowadays used to. Mechanisms like code completion, program outline, and syntax highlighting are typical of many professional programming editors, and surely increase productivity. Building an IDE from scratch for a language makes little sense, since it would require reimplementing many recurrent functionalities; thus it is always better to rely on an existing one.

Work partially supported by MIUR (PRIN 2009 DISCO).

In this respect, Eclipse provides an extensible and powerful framework for building programming language editors, thanks to its plugin architecture [15], covering all the aspects of professional IDEs that enhance productivity. However, the procedure for building an Eclipse plugin for your own programming language is still quite laborious and requires a lot of manual programming. Xtext [4], a framework for the development of programming languages as well as other domain-specific languages (DSLs), eases this task by providing a high-level framework that generates most of the typical and recurrent artifacts necessary for a fully-fledged IDE on top of Eclipse. In Xtext, the syntax of the language is defined using an EBNF grammar. Starting from this grammar, the Xtext generator creates a parser, an AST meta-model (implemented in EMF [24]), and a full-featured Eclipse-based editor. The plugins generated by Xtext already implement most of the recurrent artifacts of a language IDE, and they can be easily customized by injecting (relying on Google Guice [1], see later in the paper) our own implementations of language-specific mechanisms. In particular, as we will show in the paper, the code we need to write for customizing the IDE is minimal. In this paper, we present a prototypical implementation of Featherweight Java (FJ) [19, 23], a lightweight version of Java, which focuses on a few basic features: mutually recursive class definitions, inheritance, object creation, method invocation, method recursion through this, subtyping, and field access. In particular, an FJ program is a list of class definitions and a single main expression (see the example in Listing 1).
Since in FJ the class constructor has a fixed shape (it takes as many parameters as there are fields in the class, with the same names, including the inherited ones; it passes the arguments for the inherited fields to the superclass constructor and initializes the fields of the class with the remaining arguments), we simplified the language by making such constructors implicit. The minimal syntax, typing and semantics make the type safety proof simple and compact, in such a way that FJ is a handy tool for studying the consequences of extensions and variations with respect to Java ("FJ's main application is modeling extensions of Java", [23], page 248). For this reason, we personally used FJ as the starting point for studying language extensions to Java [9, 8, 6, 10, 7], and we felt the need for a rapid development framework for building editors and compilers for our languages. This paper can be seen as an experience report on using Xtext for building a fully-fledged Eclipse-based IDE for a general purpose language; although FJ is not a complete object-oriented programming language (like Java itself), it still deals with typical object-oriented features, and thus it still requires a considerable implementation effort. Furthermore, while Xtext presents itself as a framework mainly for domain-specific languages (DSLs), this paper also provides evidence of Xtext's suitability for general purpose languages (in this respect, Xtext only ships with DSL examples). The implementation of FJ is available as an open source project. Throughout the paper, we assume the reader is familiar with building Eclipse plugins; a generic knowledge of EMF would also be useful, though we will sketch the main features of EMF during the description of the implementation.
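The fixed constructor shape just described can be rendered in plain Java. The following sketch uses hypothetical class names (Pair mirrors the example of Listing 1; ColoredPair is ours, added only to show the inherited-fields case) to illustrate the constructors that our implementation treats as implicit.

```java
// A plain-Java sketch of the constructors FJ treats as implicit:
// one parameter per field (same names, inherited fields first),
// with the inherited ones forwarded to the superclass constructor.
class A { }
class B { }

class Pair {
    Object fst;
    Object snd;

    // The implicit FJ constructor for Pair: no inherited fields here,
    // so all arguments initialize Pair's own fields.
    Pair(Object fst, Object snd) {
        this.fst = fst;
        this.snd = snd;
    }

    Pair setfst(Object newfst) {
        return new Pair(newfst, this.snd);
    }
}

// Hypothetical subclass, only to illustrate inheritance: the arguments
// for the inherited fields go to super, the rest initialize own fields.
class ColoredPair extends Pair {
    Object color;

    ColoredPair(Object fst, Object snd, Object color) {
        super(fst, snd);
        this.color = color;
    }
}
```

Because the shape is fully determined by the (inherited and declared) fields, the FJ programmer never needs to write these constructors by hand.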

class A extends Object { }
class B extends Object { }
class Pair extends Object {
  Object fst;
  Object snd;
  Pair setfst(Object newfst) {
    return new Pair(newfst, this.snd);
  }
  Pair setsnd(Object newscd) {
    return new Pair(this.fst, newscd);
  }
}
new Pair(new A(), new B()).setfst(new A()).fst

Listing 1: An example of an FJ program.

The paper is structured as follows. In Section 2 we describe our implementation in Xtext. In Section 3 we discuss our experience with Xtext. Related work is discussed in Section 4. We conclude by outlining some future work.

2 Implementing FJ with Xtext

In this section, we describe the implementation of FJ using the Xtext [4] framework for Eclipse. We will not provide all the details of the implementation; for instance, we will not detail how we implemented the type system and the type checker, since they are not interesting in this context. Instead, we will stress what we need to provide to Xtext (i.e., specific class and method implementations) so that the framework can do the rest of the job of implementing all the IDE functionalities. The first task in Xtext is to write the grammar of the language using an EBNF-like syntax. For completeness, we show the complete Xtext grammar for FJ in Listing 2. The reader who is familiar with ANTLR [21] will note that the syntax of Xtext grammars is very similar to ANTLR's, though a little simpler. Starting from this grammar, Xtext generates an ANTLR parser [21]. The generation of the abstract syntax tree is handled by Xtext as well: during parsing, the AST is automatically built in the shape of an EMF model (Eclipse Modeling Framework [24]). Thus, the manipulation of the AST can use all the mechanisms provided by EMF itself. Note that the automatic generation of the AST by Xtext already reduces the programmer's job a lot; furthermore, by relying on a modeling framework such as EMF, the programmer does not even have to write the class hierarchy modeling the AST.
In particular, Xtext connects the EMF model representing the AST with the textual form of the program in the text editor. These two forms of the program are kept synchronized automatically by the framework. This gives the language developer IDE functionalities such as the outline view for free (we refer to a

grammar ... with org.eclipse.xtext.common.Terminals
generate fj ""

Program:
    (classes += Class)* (main = Expression)?;
Type:
    basic=('int' | 'boolean' | 'String') | classref=[Class];
TypedElement:
    Field | Parameter;
Class:
    'class' name=ID ('extends' extends=[Class])? '{'
        (fields += Field)*
        (methods += Method)*
    '}';
Field:
    type=Type name=ID ';';
Parameter:
    type=Type name=ID;
Method:
    returntype=Type name=ID
    '(' (params+=Parameter (',' params+=Parameter)*)? ')'
    '{' body=MethodBody '}';
MethodBody:
    'return' expression=Expression ';';
Expression:
    TerminalExpression ({Selection.receiver=current} '.' message=Message)*;
Message:
    MethodCall | FieldSelection;
MethodCall:
    name=[Method] '(' (args+=Argument (',' args+=Argument)*)? ')';
FieldSelection:
    name=[Field];
TerminalExpression returns Expression:
    This | Variable | New | Cast | Paren;
This:
    variable='this';
Variable:
    variable=[Parameter];
New:
    'new' type=[Class] '(' (args+=Argument (',' args+=Argument)*)? ')';
Cast:
    '(' type=[Class] ')' object=TerminalExpression;
Paren returns Expression:
    '(' Expression ')';
Argument:
    Expression;

Listing 2: The FJ grammar implemented in Xtext.

later paragraph in this Section and to Figure 1). But most of all, it allows programmatically modifying the program by acting on the model representing it, without the need to access its textual form in the editor (see the quickfix implementation in Listing 5 in Section 3): the textual form will be automatically updated to reflect the changes in the model. There is a direct correspondence between the names used in the rules of the grammar and the Java classes of the generated EMF model. For instance, if we have the following grammar snippet:

Selection returns Expression:
    receiver=Expression '.' message=Message;
Message:
    MethodCall | FieldSelection;
MethodCall:
    name=[Method] '(' (args+=Argument (',' args+=Argument)*)? ')';
FieldSelection:
    name=[Field];

the Xtext framework generates the following Java interface of the EMF model (together with the corresponding implementation class):

public interface MethodCall extends Message {
    Method getName();
    void setName(Method value);
    EList<Argument> getArgs();
}

Besides, Xtext generates many other classes for the editor of the language we are implementing. The editor provides syntax highlighting, background parsing with error markers, an outline view, and code completion. Most of the code generated by Xtext can already be used off the shelf, but other parts can (or have to) be adapted by customizing some classes used in the framework. The usage of the customized classes is dealt with by relying on Google Guice [1], so that the programmer does not have to maintain customized abstract factories [17]. In this way it is very easy to insert custom implementations into the framework ("injected", in Google Guice terminology), with the guarantee that the custom classes will be used consistently throughout the code of the framework. The validation mechanisms for the language must be provided by the language developer; in our case, this is the FJ type system.
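To make the shape of the generated code concrete, the following self-contained sketch hand-writes the kind of interface/implementation pair discussed above. It is an illustration of the pattern only, not the actual generated code: EMF's EList is replaced by java.util.List, and the other model types are reduced to hypothetical stubs.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stubs standing in for the other generated model types.
interface Method { }
interface Argument { }
interface Message { }

// Shape of the generated interface: a getter/setter pair for the
// single-valued cross-reference 'name', and only a getter for the
// list feature 'args' (collection features are mutated in place,
// so no setter is generated for them).
interface MethodCall extends Message {
    Method getName();
    void setName(Method value);
    List<Argument> getArgs();
}

// A minimal counterpart of the generated implementation class.
class MethodCallImpl implements MethodCall {
    private Method name;
    private final List<Argument> args = new ArrayList<>();

    public Method getName() { return name; }
    public void setName(Method value) { this.name = value; }
    public List<Argument> getArgs() { return args; }
}
```

Writing such boilerplate for every grammar rule is exactly the work that the Xtext/EMF generation step takes off the programmer's hands.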
Implementing the validation mechanism in a compiler usually requires writing specific visitors for the abstract syntax tree. EMF already simplifies this task by providing a switch-like functionality to efficiently execute methods with dynamic dispatch according to the actual type of an AST node. Thus, there is no need to write code implementing a visitor structure [17]. Note that this dynamic

1 If the reader compares this simplified grammar snippet with the complete grammar in Listing 2, she will note a different rule for method invocation; this is because, in order to deal with left recursion, which LL-parsing tools cannot handle directly, we need to left-factor the grammar [5].
2 Note that, when implementing a plugin for a language in Eclipse manually, writing the parser typically does not automatically provide a syntax highlighter, which must be implemented manually in Java.

public class FJTypeChecker extends FjSwitch<String> {
    ...
    public String caseField(Field field) { ... }
    public String caseMethod(Method method) { ... }
    public String caseCast(Cast cast) {
        TypeResult objectType = typesystem.getType(cast.getObject());
        if (!subtyping.isSubtype(objectType.getType(), cast.getType())
                && !subtyping.isSubtype(cast.getType(), objectType.getType()))
            return "expression type " + objectType.getType() + " and "
                + cast.getType() + " are unrelated";
        return "";
    }
    ...
    public String typeCheck(EObject object) {
        String errors = doSwitch(object);
        return errors;
    }
}

Listing 3: The typechecker implementation (snippet).

selection of methods according to the run-time type of the argument (dynamic overloading) is implemented efficiently: instead of using run-time type information checks and casts, the code generated by EMF performs a switch block using the unique integer identifiers of the classes of the generated EMF model. Thus, this selection basically performs in constant time (since the switch instruction is implemented in constant time). In order to use this functionality it is enough to inherit from the generated class implementing the switch functionality (in our case the generated EMF FjSwitch base class) and provide the methods for the model classes we want to deal with (the generated switch functionality provides default cases for the classes for which we do not provide a case, and it also handles class inheritance in the model class hierarchy). For instance, we used this technique to implement the type checker (the generic parameter of the switch class represents the return type of the case methods), as (partially) shown in Listing 3. In particular, this class relies on other objects (whose classes are not shown here) that we implemented for inferring the type of an AST node (typesystem) and for checking the subtyping relation (subtyping).
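The constant-time switch dispatch just described can be reproduced in a few lines of plain Java. The sketch below is a self-contained imitation (all names are hypothetical; this is not the EMF-generated code): each model class carries an integer classifier identifier, doSwitch selects the case method via a switch on that identifier, and a subclass overrides only the cases it needs, in the same style as FJTypeChecker.

```java
// A self-contained imitation of EMF's generated switch class:
// dispatch happens on an integer classifier id (constant time),
// not on chains of instanceof tests.
abstract class Node {
    abstract int classifierId();
}
class FieldNode extends Node {
    static final int CLASSIFIER_ID = 0;
    int classifierId() { return CLASSIFIER_ID; }
}
class CastNode extends Node {
    static final int CLASSIFIER_ID = 1;
    int classifierId() { return CLASSIFIER_ID; }
}

abstract class ModelSwitch<T> {
    public T doSwitch(Node node) {
        switch (node.classifierId()) {
            case FieldNode.CLASSIFIER_ID: return caseField((FieldNode) node);
            case CastNode.CLASSIFIER_ID:  return caseCast((CastNode) node);
            default:                      return defaultCase(node);
        }
    }
    // Default cases: subclasses override only what they care about.
    public T caseField(FieldNode node) { return defaultCase(node); }
    public T caseCast(CastNode node)   { return defaultCase(node); }
    public T defaultCase(Node node)    { return null; }
}

// A checker in the style of FJTypeChecker: empty string = "no error".
class DemoChecker extends ModelSwitch<String> {
    public String caseCast(CastNode node) { return "cast checked"; }
    public String defaultCase(Node node)  { return ""; }
}
```

The switch on small, dense integer identifiers is what lets the selection run in constant time, as opposed to a cascade of instanceof checks whose cost grows with the number of model classes.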
In our implementation we use strings to return possible type errors; thus, if a method returns an empty string, the checked expression is well-typed. Xtext leverages this mechanism by only requiring methods with the @Check annotation in a customized validator; they will be called automatically to validate the model according to the type of the AST node being checked. We thus implemented such a method for some classes of the model representing the AST of an FJ program; each of these methods simply calls typeCheck, the type checker method that takes as argument an EObject (the base class of all the classes of a generated EMF model). The validation takes place in the background, together with parsing, while the user is writing an FJ program, so that immediate feedback is available, as usual

in IDEs. For instance, the following method in the FJJavaValidator checks that a cast expression is well-typed:

@Check
public void checkCast(Cast cast) {
    String errors = typechecker.typeCheck(cast);
    if (errors != null && errors.length() > 0) {
        error(errors, FjPackage.CAST__TYPE);
    }
}

Note that the only important things in this method definition are the @Check annotation and the parameter: the internal validator of Xtext will invoke this method whenever it needs to validate an AST node representing a cast expression, i.e., a Cast instance, which corresponds to the Cast rule in the grammar of Listing 2 (remember that, given a grammar symbol, Xtext generates a corresponding AST class with the same name). The implementation of the validation in our case simply delegates to the type checker class shown above. If an error is found during typechecking, calling the method error makes Xtext generate the appropriate error marker (and since we specify the element in the AST which did not pass validation, Xtext is able to put the marker in the right place without any additional information from the programmer). Binding symbols (e.g., binding a field reference to its declaration) is an important task in compiler development. When defining the grammar of the language in Xtext, we can already specify that a specific token is actually a reference to a specific declared element. Going back to the snippet at the beginning of this section, we note that the name of the method in a method invocation is enclosed in square brackets ([Method]). This means that the token representing the method's name in a method invocation is not simply an identifier: it is a reference to the name of a declared Method (see also the complete grammar in Listing 2). For field selection we do a similar thing (referring to the name of a declared Field).
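The cross-reference idea above, and the delayed (proxy-based) resolution discussed next, can be illustrated with a small self-contained sketch. The classes here are hypothetical simplifications, not the EMF implementation: the reference object initially stores only the referred name, and binding to a declaration happens on first access.

```java
import java.util.Map;

// A toy version of EMF's reference proxies: the cross-reference keeps
// only the referred name, and resolves it lazily against a table of
// declarations. Class names are hypothetical simplifications.
class MethodDecl {
    final String name;
    MethodDecl(String name) { this.name = name; }
}

class MethodRef {
    private final String referredName;
    private MethodDecl resolved;   // stays null until first access

    MethodRef(String referredName) { this.referredName = referredName; }

    // Resolution (binding) is delayed until the reference is accessed;
    // after the first lookup the result is cached.
    MethodDecl resolve(Map<String, MethodDecl> declarations) {
        if (resolved == null) {
            resolved = declarations.get(referredName);
        }
        return resolved;
    }
}
```

In the real framework the "table of declarations" is not global: it is exactly the scope that the language developer provides, as described below.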
This allows providing more detailed information in the grammar itself; in the generated EMF model, the name of the method in a method invocation expression will not simply be a string, but a cross-reference to an instance of Method. In particular, EMF uses proxies to represent references, and it can delay the resolution (binding) of references until they are accessed. Xtext already provides an implementation of reference binding, which basically binds a reference to a symbol n to the first element definition with name n occurring in the model. However, this usually has to be adapted in order to take into account the visibility (scope) of names in a program. For instance, a field is visible only in the methods of its class, so that different hierarchies can safely have fields with the same name. Xtext supports the customization of binding in an elegant way through the abstract concept of scope. The actual binding is still performed by Xtext, but it can be driven by providing the scope of a reference, i.e., all the declarations that are available in the current context of that reference. Note that this way we can also filter out elements according to their kind, e.g., in order not to mix field names with method names when we need to resolve a reference to a field. The programmer can provide a customized AbstractDeclarativeScopeProvider: when Xtext needs to bind/resolve a symbol, it will search for methods to invoke, using reflection, according to a convention on method signatures. Suppose

public IScope scope_MethodCall_name(Selection sel, EReference ref) {
    TypeResult selectionExpressionType = typesystem.getType(sel.getReceiver());
    Class receiverType = selectionExpressionType.getClassref();
    if (receiverType != null)
        return Scopes.scopeFor(auxiliaryFunctions.getMethods(receiverType));
    // return an empty scope
    return Scopes.scopeFor(new LinkedList<EObject>());
}

Listing 4: Scope provider for a method invocation expression.

we have a rule ContextRuleName with an attribute ReferenceAttributeName assigned to a cross-reference of type TypeToReturn, used by the rule ContextType. You can create one or both of the following two methods:

IScope scope_<ContextRuleName>_<ReferenceAttributeName>(<ContextType> ctx, EReference ref)
IScope scope_<TypeToReturn>(<ContextType> ctx, EReference ref)

The Xtext binding mechanism looks for the first method (by reflection); if it does not exist, it looks for the second. If no such method exists, the default linking semantics (see above) is used. These methods are supposed to return the set of all visible elements in that part of the program (represented by the passed context); using the returned scope, Xtext will then take care of resolving a cross-reference (or issue an error in case the reference cannot be resolved). If Xtext succeeds in resolving a cross-reference, it also takes care of implementing the functionalities Eclipse users are used to: e.g., by Ctrl+clicking on a reference (or using F3) we can jump to the declaration of that symbol.
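The idea that a scope is just "the set of declarations visible at a reference" can be sketched independently of Xtext. In the following self-contained illustration (hypothetical names, not the Xtext API), resolution binds a name to the first visible declaration, and filtering by kind prevents, e.g., a field reference from binding to a method.

```java
import java.util.List;
import java.util.Optional;

// A minimal model of the scope concept: the language developer supplies
// the visible declarations; binding is just a lookup on top of them.
record Declaration(String name, String kind) { }

class Scope {
    private final List<Declaration> visible;

    Scope(List<Declaration> visible) { this.visible = visible; }

    // Bind a reference to the first visible declaration with that name.
    Optional<Declaration> resolve(String name) {
        return visible.stream()
                .filter(d -> d.name().equals(name))
                .findFirst();
    }

    // Restrict by kind, so that field references never see method names.
    Scope restrictTo(String kind) {
        return new Scope(visible.stream()
                .filter(d -> d.kind().equals(kind))
                .toList());
    }
}
```

In the real framework the lookup, error reporting, and code completion are all built on top of the scope the provider returns; this is why implementing only this one abstraction achieves so many goals at once.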
For instance, in an FJ program, all classes defined in the program are visible (there is no need to define classes in a specific order); thus, when a class reference needs to be resolved, we can simply provide as the scope the collection of all the classes in the program (obtained through some auxiliary functions not shown here; the Scopes class is part of the Xtext library):

public IScope scope_Class(Program p, EReference type) {
    return Scopes.scopeFor(auxiliaryFunctions.collectClasses(p));
}

Instead, if we consider the grammar rule for method invocation illustrated at the beginning of this section, we can drive the resolution of the method name in a method invocation, wherever such an expression can occur, by defining the method shown in Listing 4 (the code should be understandable without knowledge of Xtext). The scope provider will be used by Xtext not only to resolve references, but also to implement code completion. Thus, a programmer achieves many goals by implementing only the abstract concept of scope. Note that the code above can also return an

empty scope, e.g., if the receiver expression of a method call cannot be typed. In that case, the Xtext framework generates an error for an unresolvable method name during validation, and an empty code completion list in case the programmer requests content assist when writing the method name of a method invocation expression. This mechanism is handled by the framework itself, so that the programmer is completely relieved from these issues once the correct scope provider is implemented. Xtext provides (mostly) automatic support for file import/inclusion in the developed language through grammar rules like the following:

Import: 'import' importURI=STRING;

In our implementation we did not use this functionality, but it could easily be added so that FJ programs can be split into separate files, each including other FJ files with the import keyword. The corresponding dependencies among source files are handled by Xtext itself; the EMF model of the AST corresponding to an included file is automatically available in the currently edited source. Moreover, the modification of an included file f automatically triggers the re-validation of all the files including f. Finally, the code generation phase is dealt with in Xtext by relying on Xpand [3], a template-based code generation framework specialized for code generation from EMF models. This generation phase can reuse the lookup functions and the type system functions used during validation. An Xpand template consists of verbatim parts, which are output as they are, and of parts to be expanded, enclosed in the special characters « and »; these parts can also refer to other template files. For FJ we did not need to implement a code generator (indeed, this step is completely optional in Xtext, and code generation might also be carried out with other tools), since an interpreter would be more appropriate.
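The «…» expansion mechanism can be mimicked with a tiny stand-alone expander. The sketch below is an illustration of the template idea only, not the Xpand engine: verbatim parts are copied unchanged, while parts enclosed in « and » are looked up in the model (here simply a map from names to values).

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// A toy template expander in the spirit of Xpand: text outside the
// guillemets is emitted verbatim, «name» parts are expanded against
// the model.
class TinyTemplate {
    private static final Pattern PART = Pattern.compile("«([^»]+)»");

    static String expand(String template, Map<String, String> model) {
        Matcher m = PART.matcher(template);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String value = model.getOrDefault(m.group(1), "");
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```

The real Xpand templates are of course richer (typed model navigation, template composition), but the split between verbatim text and expanded parts is the same.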
However, since we wanted to focus on the typechecking part of FJ and on the IDE functionalities, we are still working on the interpreter. Figure 1 shows a screenshot of the FJ editor; note the code completion functionalities, the outline and the error markers (note also the folding of multi-line elements, like classes and methods, which is handled automatically). The outline view, too, is handled transparently by Xtext; its default implementation simply shows the EMF model representing the abstract syntax tree of the program. This is not always the right choice, especially for a general purpose language. Again, the outline view can be customized by hiding specific elements (in our case we do not show the bodies of methods) and by providing a customized label provider (in our case we show the names of fields with their types and the names of methods with their signatures). Besides that, the synchronization between the outline view and the contents of the editor is handled completely by Xtext.

3 Evaluation

Our experience with Xtext was in general quite positive. The main issue (which is typical of such high-level frameworks) is that some time is required to get acquainted with the concepts of the framework. In particular, Xtext relies on EMF; thus, one should be familiar with EMF concepts as well, especially when it comes to analyzing the

Fig. 1. The IDE for FJ.

model for validation and code generation. However, once this knowledge is acquired, developing a language compiler and an IDE using Xtext is extremely fast. It is surely much faster than implementing an IDE from scratch relying only on the Eclipse framework: as illustrated in Section 2, all the connections among the typical Eclipse artifacts for editors, builders, etc., are established and handled by Xtext directly. As we described in Section 2, by using our own validator, Xtext can create the error markers in the editor (and also in the Problems View) related to the EMF element which caused the error. We can provide our own quickfixes by simply deriving from an Xtext default class and providing a method which refers to a specific error detected by our validator. For instance, the code shown in Listing 5 provides a quickfix in case of duplicate classes in a program, by proposing to remove one class 3. Note how easy it is to implement that fix: since we deal with EMF model objects, which are connected by Xtext to the program text in the editor, we can simply manipulate the EMF model (in this case we remove the class from the program) and the program text will reflect this change; in particular, we do not need to manipulate the Eclipse text editor document at all. Figure 2 shows our quickfix in action. Xtext seems to be the right tool to experiment with language design and to develop language implementations. Furthermore, experimenting with new constructs in the language being developed is straightforward: it requires modifying the grammar, regenerating the Xtext artifacts, and dealing with the cases for the new constructs. Finally, Xtext allows the programmer to customize every aspect of the developed language implementation using specialized code (which is flexibly injected into Xtext

3 The Class class in the code snippet is not java.lang.Class, but the Class of our EMF model, representing an FJ class.

public class FJQuickfixProvider extends DefaultQuickfixProvider {
    @Fix(...CLASS_NAMES)
    public void removeDuplicateClass(final Issue issue,
            IssueResolutionAcceptor acceptor) {
        acceptor.accept(issue, "Remove class", ...,
            new ISemanticModification() {
                public void apply(EObject element, IModificationContext context) {
                    Class duplicateClass = (Class) element;
                    Program prog = (Program) duplicateClass.eContainer();
                    prog.getClasses().remove(duplicateClass);
                }
            });
    }
}

Listing 5: An example of quickfix implementation.

Fig. 2. The quickfix in action.

using Google Guice), even though Xtext hides many internal details of IDE development with Eclipse. Even EMF mechanisms are still open to adaptation. For instance, we developed a customized EMF resource factory for synthesizing the implicit class Object (in FJ the class Object is implicitly defined in every program, and it does not contain any field or method). In order to inject our resource factory, FJResourceFactory, we simply need to add a specific method to the runtime module of our language plugin (which will be used internally by Xtext):

public Class<? extends XtextResourceFactory> bindXtextResourceFactory() {
    return FJResourceFactory.class;
}

Just adding this binding ensures that when Xtext must instantiate an XtextResourceFactory for our language plugin, it will actually instantiate an FJResourceFactory. Thus, with Google Guice injection mechanisms we have a consistent use of our customized classes without needing to manually implement the abstract factory or factory method patterns [17]. Xtext provides some useful functionalities for writing JUnit tests for many language development components; in particular, it generates a stand-alone setup class (in our case FJStandaloneSetup) for the developed language, with all the functionalities to correctly initialize all the EMF mechanisms, so that the language can be tested as a stand-alone application (i.e., outside Eclipse).

public class CastTest extends AbstractXtextTests {
    protected void setUp() throws Exception {
        super.setUp();
        with(new FJStandaloneSetup());
    }

    public void testCastFail() throws Exception {
        Program program = (Program) getModel(
            "class B { } class A { } (B) new A()");
        Expression main = program.getMain();
        String errors = new FJTypeChecker().typeCheck(main);
        assertEquals("expression type A and B are unrelated", errors);
    }
}

Listing 6: The JUnit test for typechecking of a cast expression.

This way, we can easily test the non-UI parts of our language without manually running an Eclipse application, writing a code snippet in our language, and checking that the error marker shows up 4. This speeds up the development of the language; for instance, Listing 6 shows a snippet of the test class for our type checker (compare the expected error with the type checker code for cast expressions shown in Listing 3). We would like to conclude this Section by stressing that Xtext lets the language developer concentrate on the aspects that are typical of her own language, while relying on the framework for all the other recurrent jobs. Just to mention one fact: for developing our FJ Eclipse-based IDE, we did not have to write a single extension point.

4 Related Work

There are other tools for implementing both domain-specific and general purpose languages together with their text editors and IDE functionalities (we also refer to [22] for a comparison). Tools like IMP (The IDE Meta-Tooling Platform) [2] and DLTK (Dynamic Languages Toolkit) [14] only deal with IDE functionalities and leave the parsing mechanism completely to the programmer, while Xtext starts the development cycle right from the grammar itself. Another framework, closer to Xtext, is EMFText [18].
EMFText basically provides the same functionalities but, instead of deriving a meta-model from the grammar, it does the opposite: the language to be implemented must be defined in an abstract way using an EMF meta-model. (A meta-model is a model describing a model, e.g., a UML class diagram describing the classes of a model.) Note that XTEXT can also connect the grammar rules to an existing EMF meta-model, instead of generating an EMF meta-model starting from the grammar. Furthermore, XTEXT also has a wizard to generate an XTEXT grammar starting from an EMF meta-model. XTEXT seems to be better documented than EMFText (indeed, both projects are still young and under intense development), and more flexible, especially since it relies on Google Guice. On the other hand, EMFText offers a language zoo with many examples that can be used to start the development of another language. In this respect, the examples of languages implemented using XTEXT that we found on the web are simpler DSLs, not programming languages like FJ. Thus, this paper can also be seen as a report of effective usage of XTEXT for implementing more complex programming languages.

Footnote 4: For the moment, XTEXT does not provide corresponding utility test classes for UI parts, e.g., code completion, quickfixes, etc. However, one can start from the UI tests of the source code of XTEXT itself to write JUnit tests also for these UI components of the developed language.

EriLex [25] is a software tool for generating support code for embedded domain specific languages; it supports specifying the syntax, type rules, and dynamic semantics of such languages. EriLex does not generate any artifact for IDE functionalities; it concentrates on other aspects of language development, such as type systems and operational semantics, providing an easy-to-use syntax for developing these compiler parts (which are really close to the formal presentation of type system and semantics rules). MPS (Meta Programming System) [16] is another tool for developing a DSL, and it also provides IDE functionalities, but it does not target Eclipse and its well-known functionalities (it uses its own technology for every language development phase); thus, while it surely is an interesting technology, it probably requires much more time to learn, in case the programmer is already familiar with the Eclipse plugin technology. Neverlang [13] is based on the fact that programming language features can be easily plugged and unplugged. A complete compiler/interpreter can be built in Neverlang as the result of a compositional process involving several building blocks.
Each block deals with a single programming feature and the necessary support code, such as type checking and code generation. With respect to composition functionalities, XTEXT allows the programmer to mix grammars (so-called grammar mixins) and also to reuse recurrent syntax artifacts (like the standard terminal definitions; see the with statement at the beginning of Listing 2). However, these compositional functionalities are not yet as powerful as the ones provided by Neverlang.

5 Conclusions and Future Work

In this paper we described an implementation of FJ as an Eclipse-based IDE, using the XTEXT framework. Our experience with XTEXT shows that the framework is a rapid tool for implementing a programming language, both from the compiler point of view and from the point of view of the IDE functionalities. Our implementation, which is freely available, was the starting point for the implementation of more involved programming languages, such as SUGARED WELTERWEIGHT RECORD-TRAIT JAVA, which we presented in [12] and which is based on the calculus presented in [11], formalized by extending FJ. We are planning to keep on using XTEXT also for implementing other programming languages that we have studied formally (and also toy language extensions). As for the FJ implementation, we will use it as a case study for experimenting with several XTEXT

features (e.g., we plan to enhance the language itself with import functionalities and to implement an interpreter for FJ programs); indeed, our final goal might also be to have a full Java IDE (like the one of JDT) re-implemented in XTEXT. But this will surely require much more time (at least from the point of view of the typechecking functionalities). It would be interesting to extend XTEXT itself in order to provide a richer framework for developing type systems (like [25, 16]), e.g., by providing a DSL (developed in XTEXT) for type rules that act on the model of the language. We would also like to investigate more powerful language/grammar composition functionalities of XTEXT, taking Neverlang [13] as the main comparison.

Acknowledgments. I am grateful to the developers of XTEXT, in particular Sven Efftinge and Sebastian Zarnekow, for their prompt help and support during the development of FJ.

References

1. Google Guice.
2. IMP (The IDE Meta-Tooling Platform).
3. Xpand.
4. Xtext: a programming language framework.
5. A. V. Aho, M. S. Lam, R. Sethi, and J. D. Ullman, editors. Compilers: Principles, Techniques, and Tools. Addison Wesley, 2nd edition.
6. L. Bettini, V. Bono, and B. Venneri. Object incompleteness and dynamic composition in Java-like languages. In Proc. TOOLS, volume 11 of LNBIP. Springer.
7. L. Bettini, V. Bono, and B. Venneri. Delegation by object composition. Science of Computer Programming, in press.
8. L. Bettini, S. Capecchi, and E. Giachino. Featherweight Wrap Java: wrapping objects and methods. Journal of Object Technology, 7(2):5-29. Special Issue: OOPS Track at SAC.
9. L. Bettini, S. Capecchi, and B. Venneri. Featherweight Java with Multi-Methods. In Proc. of PPPJ, Principles and Practice of Programming in Java, volume 272. ACM Press.
10. L. Bettini, S. Capecchi, and B. Venneri. Featherweight Java with Dynamic and Static Overloading. Science of Computer Programming, 74(5-6).
11. L. Bettini, F. Damiani, and I. Schaefer. Implementing Software Product Lines using Traits. In Proc. of OOPS, Track of SAC. ACM.
12. L. Bettini, F. Damiani, I. Schaefer, and F. Strocco. A Prototypical Java-like Language with Records and Traits. In Proc. of PPPJ, Principles and Practice of Programming in Java. ACM. To appear.
13. W. Cazzola and D. Poletti. DSL Evolution through Composition. In Proc. of ECOOP Workshop on Reflection, AOP and Meta-Data for Software Evolution (RAM-SE). ACM.
14. P. Charles, R. Fuhrer, S. Sutton Jr., E. Duesterwald, and J. Vinju. Accelerating the creation of customized, language-specific IDEs in Eclipse. In OOPSLA. ACM.
15. E. Clayberg and D. Rubel. Eclipse Plug-ins. Addison-Wesley, 3rd edition.
16. M. Fowler. A Language Workbench in Action - MPS.

17. E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley.
18. F. Heidenreich, J. Johannes, S. Karol, M. Seifert, and C. Wende. Derivation and Refinement of Textual Syntax for Models. In ECMDA-FA, volume 5562 of LNCS. Springer.
19. A. Igarashi, B. Pierce, and P. Wadler. Featherweight Java: a minimal core calculus for Java and GJ. ACM Transactions on Programming Languages and Systems, 23(3).
20. J. Levine. flex & bison. O'Reilly Media.
21. T. Parr. The Definitive ANTLR Reference: Building Domain-Specific Languages. Pragmatic Programmers.
22. M. Pfeiffer and J. Pichler. A comparison of tool support for textual domain-specific languages. In Proc. of OOPSLA Workshop on Domain-Specific Modeling (DSM08), pages 1-7.
23. B. C. Pierce. Types and Programming Languages. The MIT Press, Cambridge, MA.
24. D. Steinberg, F. Budinsky, M. Paternostro, and E. Merks. EMF: Eclipse Modeling Framework. Addison Wesley Professional, 2nd edition.
25. H. Xu. EriLex: an embedded domain specific language generator. In Proc. TOOLS, volume 6141 of LNCS. Springer.

Using Domain Specific Languages for platform-based software development: the case of Android

Antonio Natali and Ambra Molesini
Alma Mater Studiorum - Università di Bologna

Abstract. Software development is usually performed with reference to some specific computational platform, in order to reduce the coding effort by exploiting a rich set of pre-built mechanisms. However, each platform injects into the design space its own concepts and architectural constraints, which are not reflected in the programming language. Current Eclipse tools allow users to express platform-related concepts in some custom architecture-oriented and model-based language; moreover, model-driven development based on meta-models can easily lead to custom IDEs, able to automate a relevant part of software production. This is discussed here with reference to the Android platform and the XText technology. Our experience shows that a design based on custom languages can make a great contribution to software development, and can also promote the evolution toward platform-independent languages and design.

Key words: Eclipse, XText, Android, Domain Specific Language, Metamodeling, Software Architecture

1 Introduction

Today, software development is becoming the art of using evolved technological infrastructures (here called platforms) and/or of building new abstractions (either domain specific or general-purpose) by exploiting those powerful platforms. As a consequence, software development is strongly related to architectural aspects, but it is quite common that architectural descriptions are expressed only in configuration files, usually written in XML rather than in the programming language. Important architectural abstractions, e.g. advanced structure-related concepts (such as components) or high-level interaction-related concepts (such as messages), are not first-class citizens in today's programming languages.
Platforms usually provide a rich set of mechanisms under a minimal set of concepts, so as to support a wide set of policies with a reduced coding effort. The problem is that, without a proper coding methodology, it is usually difficult to fully understand the nature of a software system by just reading code based on platform mechanisms. To overcome this problem, it is necessary to introduce a language able to raise the level of abstraction, in order to achieve two main goals:

i) to directly express the concepts supported by a platform; ii) to introduce new high-level architectural abstractions easily implementable on top of the platform. This language should allow a user to describe the logical architecture of a system, and also domain-specific abstractions, without entering into fine-grained details. A platform expert can then contribute directly to the project, by introducing artifacts that can drive the whole process of software development. Moreover, a formally defined custom language has several other advantages, including the possibility to define a model that can be analyzed to validate a system before it is built. This model can be used as a basis for automatic model-to-code (M2C) or model-to-model (M2M) transformations.

The aim of this work is to discuss these points with reference to two concrete technologies: Android 2.1 [1], as an example of a modern platform, and XText under Eclipse 3.6 [2], as a tool to introduce platform-specific languages and to enhance the IDE provided by the Android Development Tools (ADT) plugin [3]. XText is a framework that supports the usage of Domain Specific Languages (DSLs), which are mainly advocated for describing business functionalities; in fact, a DSL can be defined as a focused, processable language for describing a specific concern when building a system in a specific domain. In this work we instead follow the approach advocated in [4], using DSL technology to define a language that expresses (custom) architectural aspects of a system.

The work is structured as follows. In Section 2 we briefly introduce the Android and XText technologies, while Section 3 presents a possible language for describing the architecture of Android applications with reference to a simple case study. Section 4 presents our conclusions and outlines future work.
2 The technologies

In a well-structured software application, one should recognize three macro-parts: a general part, which can be shared among all applications; a specific part, which is peculiar to the application; and a schematic part, which is not general but still reusable for different applications in the domain. The Android system (Subsection 2.1) can be viewed as a support for the schematic part of mobile applications, since it provides mechanisms related to the three main logical dimensions of a software architecture: the structural part, the interaction part and the behavioral part. The XText framework (Subsection 2.2) is the tool allowing us to express the concepts provided by Android and to build advanced custom IDEs for Android applications.

2.1 Android

Google defines Android as a software stack for mobile phones including the operating system, the middleware, and the applications. Android is based on the Linux operating system, and all of its applications are written in Java. However, Android gives direct support to a set of concepts that are not part of the Java conceptual space. In particular, the glossary [5] introduces some of the

basic terminology, by giving a definition of a set of terms including Application, Activity, Content Provider, Intent, Intent Filter, Service, Broadcast Receiver. The Android platform can be viewed as the system that provides the operational semantics of these concepts. Since none of these concepts is reflected in Java, we have an example of a semantic gap between the coding language and the conceptual space that the designer must have in mind to organize Android applications. To help the programmer's job, Android provides an emulator that mimics all of the hardware and software features of a mobile device, except that it cannot receive or place actual phone calls. Moreover, it provides an Eclipse plugin (the ADT [3]) to make programming and debugging easier. The ADT Project Wizard helps in creating and setting up all of the basic files needed for an Android application. It partially automates the process of building applications, with particular reference to the GUI part, and also provides an editor that helps in writing valid XML Android manifest and resource files. Custom IDEs can be introduced as extensions of the ADT, with the goal of automatically building the schematic part of an Android application, allowing the user to focus on the application's business logic.

2.2 XText

XText is a framework that supports the development of language infrastructures. Starting from a grammar that describes both the concrete and the abstract syntax of a language, the XText framework generates a specific text editor for the Eclipse IDE and a parser that, in its turn, instantiates an EMF Ecore [6] model from the sentences of the language. In this way a user-defined language is also a meta-model: we can specify constraints on the generated Ecore model using a streamlined OCL (Check), and use all EMF-based tools to process the model.
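The grammar-to-parser-to-model pipeline just described can be mimicked in miniature. The sketch below is not the XText API: the DataRule class, its regular-expression "grammar" for a sentence such as data unibomsg scheme "msgcontent", and the validate check are all invented for illustration; XText would derive the model class and the parser from a grammar rule instead.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DataRule {
    // Model class that the framework would derive from the grammar rule.
    public static class Data {
        public final String name, scheme;
        public Data(String name, String scheme) { this.name = name; this.scheme = scheme; }
    }

    // Hand-written stand-in for the parser generated from one grammar rule.
    private static final Pattern RULE =
            Pattern.compile("data\\s+(\\w+)\\s+scheme\\s+\"([^\"]*)\".*");

    public static Data parse(String sentence) {
        Matcher m = RULE.matcher(sentence.trim());
        if (!m.matches()) throw new IllegalArgumentException("syntax error: " + sentence);
        return new Data(m.group(1), m.group(2));
    }

    // A Check-style constraint: evaluated on the model object, not on raw text.
    public static String validate(Data d) {
        return d.scheme.isEmpty() ? "scheme must not be empty" : null;
    }

    public static void main(String[] args) {
        Data d = parse("data unibomsg scheme \"msgcontent\" host \"it.unibo.msg\"");
        System.out.println(d.name + " uses scheme " + d.scheme);
    }
}
```

The point of the sketch is the separation of concerns: syntax errors are caught while building the model, while semantic constraints (the validate method) are expressed against model objects, which is exactly where XText's Check constraints operate.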
Therefore, models become first-class citizens in the software development process, since the executable application is constructively generated from the model, i.e., from a formal, high-level description of the software system to build. According to a model-driven software development approach [7], as many artifacts as possible are derived automatically from the formal models. These are mainly source code (such as Java), but also configuration files or scripts required for the execution of a system. Documentation, as well as tests to check the execution of the system or its components, can also be generated.

3 A platform-specific language for Android

Before entering into the details of the Android conceptual space, let us introduce our reference application: we use an Android phone to send some local data to a remote server via TCP. This application could be part of some other application, e.g. Shazam [8], which uses a mobile phone's built-in microphone to gather a brief sample of music being played. An acoustic fingerprint is created, and is compared

against a central database for a match; if a match is found, information such as the artist, song title, and album are relayed back to the user. The language to express the Android conceptual space is called AAASL (Android Application Architecture Specification Language); the interested reader can find the definition of its syntax in [9]. The main goal of an AAASL specification is to give a description of the architecture of an Android application, by hiding any details related to the internal organization of a component, and by focusing on the interactions between components. Let us report here a possible AAASL specification for our application:

// --- Definition of data ---
data unibomsg scheme "msgcontent" host "it.unibo.msg"    //(1a)

// --- Definition of actions ---
action getsound                                          //(2a)
    data unibomsg
    category "android.intent.category.DEFAULT"
    withanswer
action sendsound                                         //(2b)
    data unibomsg
    category "android.intent.category.DEFAULT"

// --- Definition of operations ---                      //(3a)
op connectasclient return void inputtype String inputtype int
op sendaline return void inputtype String
op receivealine return String
op disconnect return void

// --- Definition of services ---
ServiceInterface ITcpSender package                      //(3b)
    provides connectasclient, sendaline,
             receivealine, disconnect;                   //(3c)
Service TcpSender implements ITcpSender;                 //(3d)

// --- Definition of activities ---
Activity AcquireSound                                    //(4a)
    action getsound button;                              //(4a1)
Activity FindSound launchable                            //(4b)
    callactivity AcquireSound foraction getsound button  //(4b1)
    execaction sendsound button;                         //(4b2)
Activity SoundInfo                                       //(4c)
    action sendsound                                     //(4c1)
    useservice ITcpSender forop connectasclient arg "\"..\"" arg "80"
    useservice ITcpSender forop sendaline fromapplication
    useservice ITcpSender forop receivealine
    useservice ITcpSender forop disconnect;              //(4c2)

// --- Definition of the application ---

Application SoundInfo                                    //(5)
    avd 7 package
    entryactivity FindSound

First data and actions are declared, then the definition of components (services and activities) follows; the specification ends with the definition of the application. This specification can be viewed as a constraint on the collective behavior of the components, i.e. a specification of what they expect from each other, without saying how their internal behavior is organized. The rest of this section is devoted to discussing the rationale behind this specification: first we face the structural dimension (Subsection 3.1), immediately followed by the interaction part (Subsection 3.2); the discussion ends with the issue of component behavior (Subsection 3.3), with reference to activities and services. Space limitations do not allow us to discuss other kinds of Android components here; the interested reader can consult [9].

3.1 Structure

The starting point of any architecture is the structural dimension, which in its turn requires the introduction of the notion of software component. Android distinguishes among a set of different component types: Activity, Service, Content Provider and Broadcast Receiver; these are the parts that can compose an Android application. Each Android application has its own user id, its own separate Linux process and its own instance of the Virtual Machine (VM).

Activities. Here, our reference component is the Activity, which in the glossary [5] is defined as a single screen in an application, with supporting Java code, derived from the Activity class. The class is an important part of an application's overall lifecycle (see also Subsection 3.3), and the way activities are launched and put together is a fundamental part of the Android platform's application model. Moreover, the glossary states that most commonly, an activity is visibly represented by a full-screen window that can receive and handle UI events and perform complex tasks.
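Once parsed by XText, a specification like the one above becomes a model instance. The structural core of such a meta-model can be pictured with a few plain Java classes; this is a simplified sketch, not the actual generated Ecore classes, and the entryActivity query is an invented example of the kind of structural check the model enables.

```java
import java.util.ArrayList;
import java.util.List;

public class AaaslModel {
    // Simplified structural meta-model: an application owns activity definitions.
    public static class ActivityDef {
        public final String name;
        public final boolean launchable;
        public ActivityDef(String name, boolean launchable) {
            this.name = name;
            this.launchable = launchable;
        }
    }

    public static class Application {
        public final String name;
        public final List<ActivityDef> activities = new ArrayList<>();
        public Application(String name) { this.name = name; }

        // A structural query over the model: the entry activity is the
        // (first) component qualified as launchable.
        public ActivityDef entryActivity() {
            for (ActivityDef a : activities)
                if (a.launchable) return a;
            throw new IllegalStateException("no launchable activity declared");
        }
    }

    public static void main(String[] args) {
        Application app = new Application("SoundInfo");
        app.activities.add(new ActivityDef("AcquireSound", false));
        app.activities.add(new ActivityDef("FindSound", true));
        System.out.println("entry activity: " + app.entryActivity().name);
    }
}
```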
Thus, the first specification of the structure of our application can be written in AAASL by introducing the application, named SoundInfo (line 5 in the previous code), and its main activity, named FindSound (4b). The SoundInfo specification states that the main application component is FindSound; it is qualified as launchable so that it can be directly started by the user. All the components involved in an Android application must be explicitly declared in a file called AndroidManifest.xml; this XML file can now be built automatically from the specification above. This is a typical configuration task required by the usage of the Android platform. Our first specification also states that the main activity FindSound can execute two actions defined in some other component: getsound (keyword callactivity, 4b1) and sendsound (keyword execaction, 4b2).
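The manifest-generation step just mentioned can be sketched as a toy M2C transformer: given the declared activities and the entry activity, emit the corresponding <activity> declarations. The package name and the exact XML shape below are illustrative, not the output of the authors' generator.

```java
import java.util.List;

public class ManifestGen {
    // Toy M2C transformer: emit AndroidManifest-style declarations
    // from the components declared in an architectural specification.
    public static String manifest(String pkg, List<String> activities, String entry) {
        StringBuilder sb = new StringBuilder();
        sb.append("<manifest package=\"").append(pkg).append("\">\n");
        sb.append("  <application>\n");
        for (String a : activities) {
            sb.append("    <activity android:name=\".").append(a).append("\">\n");
            if (a.equals(entry)) {
                // Only the launchable entry activity gets the LAUNCHER filter.
                sb.append("      <intent-filter>\n");
                sb.append("        <action android:name=\"android.intent.action.MAIN\"/>\n");
                sb.append("        <category android:name=\"android.intent.category.LAUNCHER\"/>\n");
                sb.append("      </intent-filter>\n");
            }
            sb.append("    </activity>\n");
        }
        sb.append("  </application>\n</manifest>\n");
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(manifest("it.unibo.soundinfo",
                List.of("FindSound", "AcquireSound", "SoundInfo"), "FindSound"));
    }
}
```

The design choice this illustrates is that the manifest stops being a hand-maintained artifact: every component declared in the specification is guaranteed to be declared in the manifest, removing one class of configuration errors.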

Actions. The glossary [5] says that an Action is a description of something that an Intent sender wants done. An action is a string value assigned to an Intent. Since the notion of Intent introduces us into the interaction dimension, we have to move to the next section. Before doing so, we note that two activities of our specification declare (4a1, 4c1) to perform actions (2a, 2b). We also note that the main activity FindSound does not declare to know the name of the component implementing the required sendsound (4b2), while it makes explicit reference to the AcquireSound activity (4b1). The keyword button in 4b1 and 4b2 is a boolean attribute in the meta-model that, when set, states that the specified behavior must be performed under user control via a GUI.

GUI. An Android activity is usually a single screen in an application; the Activity base class displays a user interface composed of Views and responds to Events. The UI in Android can be built either by defining XML code or by writing Java code. Defining the GUI structure in XML is preferable because, according to the Model-View-Controller principle, the UI should always be separated from the program logic. Again, an M2C generator associated with the specification above defines an Android XML layout file for each activity.

3.2 Interaction

The main Android components (including Activities and Services) are activated by asynchronous messages called intents. The glossary [5] says that an Intent is a message object that you can use to launch or communicate with other applications/activities asynchronously. It includes several criteria fields that you can supply, to determine what application/activity receives the Intent and what the receiver does when handling the Intent. Available criteria include the desired action, a category, a data string, the MIME type of the data, and others.
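The matching of an intent's criteria against published filters can be mimicked in a few lines. This is a plain-Java sketch of the idea, not the Android dispatch algorithm: real resolution also scores data URIs, MIME types and priorities, and the component and action names below are taken from the running example only for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class IntentResolver {
    // A published intent filter: which actions/categories a component handles.
    static class Filter {
        final String component;
        final Set<String> actions, categories;
        Filter(String component, Set<String> actions, Set<String> categories) {
            this.component = component;
            this.actions = actions;
            this.categories = categories;
        }
    }

    private final List<Filter> installed = new ArrayList<>();

    public void publish(String component, Set<String> actions, Set<String> categories) {
        installed.add(new Filter(component, actions, categories));
    }

    // Resolve an implicit intent: the first component whose filter declares
    // the requested action and all requested categories.
    public String resolve(String action, Set<String> categories) {
        for (Filter f : installed)
            if (f.actions.contains(action) && f.categories.containsAll(categories))
                return f.component;
        return null;
    }

    public static void main(String[] args) {
        IntentResolver system = new IntentResolver();
        system.publish("AcquireSound", Set.of("getsound"),
                Set.of("android.intent.category.DEFAULT"));
        system.publish("SoundInfo", Set.of("sendsound"),
                Set.of("android.intent.category.DEFAULT"));
        System.out.println(system.resolve("sendsound",
                Set.of("android.intent.category.DEFAULT")));
    }
}
```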
A component sends an intent to the Android system, rather than sending it directly to another component. The Android system is responsible for resolving the best-available receiver for each intent, based on the criteria supplied in the intent and the Intent Filters defined by other components. While an intent is effectively a request to do something, an Intent Filter describes the intents that an Intent Receiver is able to handle. Components must publish their Intent Filters in the AndroidManifest.xml file; this is a configuration task that is automatically done from our AAASL specification. The two most important parts of the intent structure are the action and the data to act upon. Typical values for action are MAIN (the front door of the application), VIEW, PICK, EDIT, etc. The data is expressed as a Uniform Resource Identifier (URI). Data are introduced at the very beginning of an AAASL specification (1), while the action declarations immediately follow (2a, 2b). While the Android programmer must deeply understand what an intent is and learn how to use the related programming mechanisms, AAASL leaves the concept of intent behind the scenes, viewing it just as an implementation mechanism related to higher-level concepts of the interaction dimension. This

is a first shift towards a more platform-independent specification, grounded on the following question: in which way can an activity (or, in general, a component) logically interact with other activities or with other component types? Android supports two main forms of interaction: explicit interaction, in which the caller must know the class of the callee, and implicit interaction, in which the caller knows only the name of actions. Our application shows both interaction types, i.e., a call to an activity (4b1) for an action that returns some information (2a), and the execution of an action (2b) without any explicit knowledge of the receiver and without any answer (4b2). Since the AAASL specification is automatically transformed by the XText framework into an Ecore model, M2C/M2M transformers can perform all the work required to implement explicit and implicit interactions, by properly using the communication mechanism based on Android intents. These transformers can be designed so as to embed best design practices, e.g. by taking into account appropriate design patterns [10]. Our current strategy is to introduce as many local methods as there are interaction operations to be performed. For example, from the specification of the explicit interaction 4b1, the following method is generated:

protected void Call_AcquireSound() throws Exception {
    Intent intent = new Intent(FindSound.this, AcquireSound.class);
    intent.setAction("getsound");
    intent.setData(makeUri("msgcontent://it.unibo.msg/", inputValue));
    int requestCode = ActionCode.acquireSoundActionCode;
    startActivityForResult(intent, requestCode);
}

This method is generated as part of the class FindSoundSupport (see Subsection 3.3), associated with the FindSound activity; the method can be viewed as an application-specific operation that hides the mechanisms required to implement the interaction pattern specified in the architectural description.
In a similar way, from the specification of the implicit interaction 4b2, the following method is generated within the FindSoundSupport class:

protected void Exec_sendSound() throws Exception {
    Intent intent = new Intent();
    intent.setAction("sendsound");
    intent.setData(makeUri("msgcontent://it.unibo.msg/", inputValue));
    startActivity(intent);
}

Activity interaction implies navigating from screen to screen, a behavior that is accomplished by resolving intents. To navigate forward, an activity calls startActivity(someIntent); the system looks at the intent filters of all installed applications and picks the activity whose intent filters best match someIntent. The new activity is informed of the intent, which causes it to be launched. The operation startActivityForResult assumes that the selected activity will return some information to the calling activity. The Android platform makes the

returned information available to the calling activity by invoking its callback method, named onActivityResult.

3.3 Behavior

AAASL aims at hiding any detail related to the internal organization of a component; but, since components are introduced to perform some business logic, M2C/M2M transformers can generate the schematic part of the application only. This part is usually included in abstract support classes, which must be specialized by the developer in order to complete the system with the application-specific part. Sometimes, code sections that must be completed include the comment //TODO.

Activities. Android activities are managed using a stack; moreover, an activity can be in one of several states: running, paused, stopped, destroyed. In some cases an Activity may return a value to the previous activity; for example, the activity AcquireSound (4a) lets the user select a sound to be given as a result (keyword withanswer in action 2a) to the calling activity FindSound (4b). A conventional Android programmer must know all the behavioral details related to the activity component, since the code of an activity must be structured according to its state model. To help users in this task, the ADT plugin already includes automatic generation of artifacts; for example, it generates a very simple skeleton class for the main activity, an XML file for the GUI layout, the manifest file AndroidManifest.xml, and a utility class that defines constants useful to refer to application resources. However, no support can be given by the ADT to structure the code of an activity so as to take into account other relevant aspects of the system, e.g., the interaction dimension. Thanks to the AAASL specification, it is instead possible to provide an activity class with the application-specific interaction methods described in the previous section.
For example, the generated code for our main activity is:

public abstract class FindSoundSupport extends BaseActivity {
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.findsoundinfoactivitylayout);
        // Create user cmd buttons
        ...
    } //onCreate

    // Interactions
    protected void Call_AcquireSound() throws Exception {...}
    protected void Exec_sendSound() throws Exception {...}
}

The class BaseActivity is a predefined class that extends the Android Activity class, implementing utility methods to build message handlers, message

notifiers, connections to services and so on. Since our main activity calls an action that can give an answer, the code generator also creates an answer handler in FindSoundSupport:

protected void onActivityResult(int reqCode, int resCode, Intent data) {
    super.onActivityResult(reqCode, resCode, data);
    if (reqCode == ActionCode.acquireSoundActionCode) {
        //TODO
    }
}

The //TODO comment means that the specification of the method is not complete; thus the programmer has to write a specialized version of it in the concrete FindSound class that extends the abstract schematic class FindSoundSupport. Our generator also provides a basic (business-related) activity behavior that consists in looking at the input intent and performing the associated action:

protected void lookAtInput() { //in xxxSupport
    Intent i = getIntent();
    if (i != null && i.getAction() != null)
        // Check the nature of the activity input and work
        if (i.getAction().contains("action.MAIN")) doJob();
        else execAction(i);
}

The method can be called when the activity is resumed (i.e. by the onResume method). It looks at the input intent; if this intent was launched by the platform (i.e., its action is action.MAIN), the user-defined local method doJob is called; otherwise the activity calls the local method execAction, which performs the action required by the given intent. The method execAction is generated from the AAASL specification for all the activities that declare to perform some action.
For example, the generated AcquireSoundSupport class includes:

    protected void execAction(Intent input) { // in AcquireSoundSupport
      if (input.getAction().equals("getSound")) getSound();
    }
    protected void getSound() {
      Intent result = new Intent();
      //TODO
      result.setData(makeUri("msgcontent://it.unibo.msg/", /*TODO*/ null));
      setResult(Activity.RESULT_OK, result);
      finish();
    }

Since the method getSound is only partially specified, it must be overridden in the user-defined class AcquireSound that extends AcquireSoundSupport. However, the structure of the generated code reminds the user that the finish operation, inherited from the Activity class, removes the activity from the history stack: without executing it, no answer is sent to the caller.
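The generator/user split illustrated above can be condensed into a plain-Java sketch, with the Android specifics stripped out. The class and method names below are ours, chosen to mirror the example; this is not the actual generated code:

```java
// Sketch of the generated-support / hand-written-subclass pattern.
// The Support class is regenerated from the model on every change; the
// concrete subclass holds the business logic and survives regeneration.
abstract class AcquireSoundSupportSketch {
    // Generated dispatch logic (stable, never edited by hand).
    final String execAction(String action) {
        if ("getSound".equals(action)) return getSound();
        return "unknown action";
    }
    // Hook left incomplete by the generator (the //TODO in the paper):
    // the user supplies it in a subclass.
    abstract String getSound();
}

class AcquireSoundSketch extends AcquireSoundSupportSketch {
    @Override
    String getSound() {
        return "sound-data"; // hand-written business logic goes here
    }
}

public class PatternDemo {
    public static void main(String[] args) {
        AcquireSoundSupportSketch a = new AcquireSoundSketch();
        System.out.println(a.execAction("getSound")); // prints sound-data
    }
}
```

The point of the pattern is that regenerating the schematic part never overwrites the hand-written part.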

Antonio Natali and Ambra Molesini

Services. An Android Service is a long-lived component that runs in the background without a UI; it can be local to an application or remote, i.e., running in a process different from the application's process. A remote service must expose an interface that any client uses to communicate with it, after having acquired a connection. In AAASL a service interface is mandatory and is specified as a set of provided operations, while a service is specified as the implementation of a service interface. In our application, the service TcpSender (3d) of interface ITcpSender (3b) performs TCP-based communication with an external server, providing (3c) a set of operations (3a) to connect with a remote server, to send and receive information, and to disconnect. In AAASL, a service is considered remote if the specification does not include the declaration of the service but only a service interface. Since Android components must be explicitly declared, the M2C transformers generate a proper declaration of local services in the AndroidManifest.xml. In Android, the interaction with a service is still based on intents, but it is now quite similar to a request-response message-based interaction. As in the case of activities, an interaction with a (local) service can be explicit, i.e., the client must know the service implementation class, or implicit, i.e., the client must know the service interface only. In our application, the activity SoundInfo declares (useService clauses, 4c2) an implicit form of interaction with the service ITcpSender. Explicit service interaction is avoided. The generators related to the Service concept transform the AAASL specification of the service interface into an AIDL specification that is used by the ADT plugin to generate a service Stub; the stub provides different implementations for local and remote services (for remote ones a Stub.Proxy is generated).
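Concretely, the generated AIDL for ITcpSender could look roughly like the following fragment. The paper only names the operations (connect, send/receive, disconnect) and sendALine, so the exact signatures below are our assumption:

```aidl
// ITcpSender.aidl -- hypothetical reconstruction; signatures are assumed.
interface ITcpSender {
    void connect(String host, int port);
    void sendALine(String line);
    String receiveALine();
    void disconnect();
}
```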
For each service we also create a class with the same name as the service; this class extends the Android Service class and provides (as done for activities) default versions of the methods related to the various service states (onCreate, onDestroy, onBind, onUnbind, etc.); moreover, it defines all the service operations, delegating them to a service binder that extends the ADT-provided Stub. The binder must be written by the end-user; it must have a name of the form UDxxx, where xxx is the name of the service declared in the AAASL model. The service lifecycle can now be perceived by the programmer as follows: a component (like SoundInfo) that needs a service of interface ITcpSender attempts to acquire a connection to an implementing service by executing:

    sc_ITcpSender = new ITcpSenderConnection();
    connectToService("ITcpSender", sc_ITcpSender);

where ITcpSenderConnection is a utility class that implements the ServiceConnection interface; it also provides a method (getObjService) to get the current connection (if any) to the service. The (generated) method connectToService starts the service and calls the bindService method defined by the Android Context class.

As a consequence of the execution of bindService, the methods onCreate, onStartCommand and onBind are executed. As a result, the user only needs to know that the ITcpSenderConnection object is what is needed to call one of the operations of the service. Thus, the interaction between the SoundInfo activity and the service of interface ITcpSender is implemented in the class SoundInfoSupport as an application-specific method that assumes the following form:

    protected void Exec_sendALine(ITcpSenderConnection c, String m) {
      ITcpSender objConn = c.getService();
      if (objConn != null) objConn.sendALine(m);
    }

4 Conclusions

Capturing with a custom language all the possible mechanisms embedded in an evolved platform is a difficult task; in our opinion this effort should mainly be done by the platform designers themselves. In the meantime, software designers could take this possibility into account, since the use of Eclipse DSL technologies allowed us to obtain several advantages: i) the learning phase of a new platform is reduced; ii) a common vocabulary can be used to focus attention on system architecture and on business logic rather than on computational details; iii) custom IDEs, more tailored to specific application needs, can easily be built, even by extending those already available; iv) the development process can be improved with architecture-based artifacts and the development time can be reduced, since a custom IDE can automatically build the schematic part of applications; v) the quality of the product is increased, since platform-dependent, critical aspects of software can be managed (once and for all) by expert people; vi) high-level specifications can play the role of models that can also be validated from the semantic point of view, e.g., by using the Check language included in the Xtext suite [2].
In summary, we can state that the approach described in this work is effective, is within the scope of current software production technologies, and is within the reach of even small teams. Describing system architectures with formal languages is not a new idea; various communities already recommend using Architecture Description Languages (ADLs) [11]. But the main goal here is not to define yet another general-purpose language for describing architectures, but rather to build our own language for building applications, by exploiting the concepts provided by some reference platform. However, this approach is not limited to platform-specific design. It can also promote platform-independent design, with particular reference to a critical aspect of any distributed application: the logical interactions among components. In fact, the explicit visibility of the TCP service in our reference application can be considered too fine a detail, more related to an implementation architecture than to a logical design. A more abstract specification should

state something like: SoundInfo issues a request to an external entity named SoundServer that works using the TCP protocol. The need for TCP support should be deduced from such a specification, and the introduction of a service like TcpSender should be done automatically (by hiding the service inside the schematic part) by some model transformer. The way is also open to face the emerging requirements of software architecture modeling, like those described in [12], and future work could consider the possibility of mapping platform abstractions into a core set of architectural concepts. But if we assume this more general point of view, then a new, perhaps more important, question immediately arises at the application level: do the logical interaction patterns provided by our reference platform have enough expressive power for our application domain? In the case of Android our answer is no, since protocols are mechanisms and the request-response pattern is just one of the possible message-passing patterns. But with DSL technologies it is not difficult to include in a language such as AAASL the concepts described in [13] (which extend the vocabulary of interaction with terms such as dispatch, invitation, and signal), by delegating to some M2M mapping the task of implementing these concepts on an abstract core infrastructure that can in turn be mapped onto some specific platform. We have already faced this kind of generalization, and these extensions will be described in a forthcoming paper.

References

1. Android. (2010)
2. The Eclipse Foundation: Xtext site: Home page. (2010)
3. Android: ADT. (2010)
4. Voelter, M.: Architecture as Language. IEEE Software (2009)
5. Android: Glossary. (2010)
6. Steinberg, D., Budinsky, F., Paternostro, M., Merks, E.: EMF: Eclipse Modeling Framework, 2nd Edition. Addison-Wesley Professional (2009)
7. Stahl, T., Voelter, M.: Model-Driven Software Development. Wiley (2006)
8.
Shazam: Shazam site. (2010)
9. Natali, A.: The AAASL Language. Internal report, DEIS (2010)
10. Gamma, E., Helm, R., Johnson, R., Vlissides, J.: Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley (1995)
11. Medvidovic, N., Taylor, R.: A classification and comparison framework for software architecture description languages. IEEE TSE 26 (2000)
12. Di Ruscio, D., Malavolta, I., Muccini, H., Pelliccione, P., Pierantonio, A.: Developing next generation ADLs through MDE techniques. In: ICSE '10: Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering, New York, NY, USA, ACM (2010)
13. Natali, A., Molesini, A.: Towards model-driven communications. In Ardil, C., ed.: World Academy of Science, Engineering and Technology. Volume 64, Rome, Italy, Academic Science Research (2010). International Conference on Software Engineering and Technology (ICSET 2010)

Design and Development of an Extensible Test Generation Tool based on the Eclipse Rich Client Platform

Angelo Gargantini 1 and Gordon Fraser 2

1 Università degli Studi di Bergamo, Italia
2 Saarland University, Computer Science, Saarbrücken, Germany

Abstract. In automated software testing, a common task is the derivation of test cases from models. The wealth of different test criteria, model formalisms, and testing strategies makes reusability of such test generation tools a very challenging task. Leveraging the flexibility offered by the Eclipse Rich Client Platform, we present a new test generation tool that achieves reusability by abstracting from specific details of the test generation, and by matching these features with Eclipse extensions. The resulting tool allows the configuration of different backends for extracting tests from models, input languages, test strategies, and test criteria via plug-ins.

1 Introduction

Software testing remains the prevalent method to determine and to improve the quality of software. As exhaustive testing is impossible, coarser test criteria are used in practice. Because these criteria can only give an intuition of quality but never prove the absence of errors, researchers have come up with a great number of different criteria with different characteristics. It is possible to interpret most of these criteria in a common framework that allows leveraging the power of modern model checking tools as vehicles for test generation: a test criterion is represented as a set of temporal logic formulas, for which the model checker can efficiently derive witnesses and counterexamples, serving as test cases. This offers a nice theoretical framework for test generation, but instantiating this framework in practice requires a number of decisions: input language, test strategy, model checking tool, test criteria; a tool that commits to a particular choice of these decisions is likely to be unusable for many other practical applications.
Clearly, there is a need for an extensible test generation tool that allows customization with respect to many different aspects. In this paper we present ExTGT (EXtensible Test Generation Tool), a test generation tool that can be customized in terms of plug-ins, thus allowing adaptation to many different practical settings. The implementation of ExTGT leverages the possibilities

offered by the Eclipse Rich Client Platform. It offers a powerful environment which defines a set of extension points, enabling the construction of a family of test generation tools, each defining its own extensions as plug-ins for ExTGT. In the classical Eclipse style, ExTGT itself is a collection of extensions, both for the Eclipse platform and for its own extension points.

This paper is organized as follows: Section 2 gives a general overview of model based testing based on model checking techniques. Section 3 contains the details of the implementation of the Eclipse based test generation tool, and shows how ExTGT has been extended. Finally, Section 4 concludes the paper and gives an outlook on how ExTGT will be further extended in the future.

2 Model based Test Generation by Model Checking

ExTGT is a model based testing tool: a test model describes the desired behavior, and test case generation means sampling execution paths of this test model. To perform this test generation, ExTGT uses model checking. A model checker is a formal verification tool which takes as input a model and a property, and then exhaustively verifies whether the property holds on the model or not. A nice feature of model checkers is their ability to derive counterexample and witness sequences; these sequences are essentially test cases. While model checking is an industrially accepted technique in the hardware domain, it is also very useful in the software domain. Model checking can be used to automatically prove certain properties of programs, but this is not sufficient to replace software testing in general. The applicability of model checking is limited by the scalability of the chosen model checking techniques, so verification of software is currently limited to small programs. Model checking as part of a software development cycle is often more practical when applied to more abstract artifacts such as models and formal specifications.
However, proving correctness of such an artifact with regard to certain properties does not automatically prove correctness of the actual system; therefore, model checking cannot replace software testing. This is true even if exhaustive verification of a program's source code is possible: the actual system depends on many additional factors such as the compiler, the hardware and software environment, other components, etc.

Counterexample generation is one of the most useful aspects of model checkers in practice. From a software testing point of view, counterexamples essentially are test cases. Mapping counterexamples to automatically executable test cases is usually a straightforward task, although the exact details depend on the system under consideration. Callahan et al. [7] and Engels et al. [13] initially proposed to use model checkers to automatically generate test cases. Many different techniques to systematically derive test cases with model checkers have been proposed since then. A large body of work considers test case generation based on coverage criteria. Here, the idea is to represent each coverage item described by the criterion as a temporal logic property (trap property or test predicate), such that any counterexample to the property is a test case that covers the underlying

Fig. 1. Test Generation by Model Checking: A specification is parsed and serves as source for both test predicates and the model for the model checker. Test generation is performed by querying the model checker with the model and one test predicate at a time.

coverage item. Specification based structural coverage criteria are dominant in the literature [21, 22, 28], although some work has also been done on property based coverage criteria [17, 31, 34], and Hong et al. [25] describe control flow and data flow coverage criteria. Combinatorial coverage, where test cases for all pairs or tuples of input/state variables are required, is considered in [8, 26]. A similar approach is possible using mutation: in general, mutation describes a process where small changes are introduced in software artifacts in order to measure the quality of existing test sets, or in order to help generating new test cases that can detect these changes. Mutation can be applied to model checker specifications in order to automatically generate mutation adequate test suites. This was originally proposed by [1, 3], and recently refined by [27] and [20]. Mutation is sometimes also applied to models in order to force counterexample generation with respect to safety properties [2, 16]. Testing with software model checkers has been considered by [5], who use the model checker Blast [23] to create test cases from C code. Test cases can be generated with regard to predicates (i.e., safety properties) and locations in the source code; consequently, it is possible to derive test cases for code-based coverage criteria. [33] use the Java PathFinder [32] model checker to derive test cases in a similar manner.
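The scheme just described — encode each coverage item as a trap property and harvest counterexamples as tests — can be sketched in plain Java. All interfaces and names below are our own illustrations, not ExTGT's actual API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

// Illustrative sketch of trap-property-based test generation.
interface ModelChecker {
    // Returns a counterexample trace for the property, if one exists.
    Optional<String> check(String model, String trapProperty);
}

class TrapLoop {
    static List<String> generate(String model, List<String> predicates,
                                 ModelChecker mc) {
        List<String> suite = new ArrayList<>();
        for (String p : predicates) {
            // The trap property claims the coverage item is unreachable;
            // a counterexample to it is a trace that covers the item.
            String trap = "AG !(" + p + ")";
            mc.check(model, trap).ifPresent(suite::add);
        }
        return suite;
    }
}

public class TrapLoopDemo {
    public static void main(String[] args) {
        // Fake checker: every predicate except "infeasible" yields a trace.
        ModelChecker fake = (m, trap) -> trap.contains("infeasible")
                ? Optional.empty()
                : Optional.of("trace covering " + trap);
        List<String> suite = TrapLoop.generate(
                "asmModel", List.of("mode = CRUISE", "infeasible"), fake);
        System.out.println(suite.size()); // prints 1
    }
}
```

Infeasible coverage items simply yield no counterexample and are skipped, which is exactly how the feasibility problem surfaces in this approach.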
2.1 Test generation process

The test generation process employed by ExTGT is depicted in Figure 1: a specification file containing the Abstract State Machine [6] is read by the tool's parser component. Starting from the ASM specification, according to the testing

criteria defined, the tool builds a set of test predicates. Note that automated test case generation requires formalization of the test objective (e.g., satisfaction of a coverage criterion), which can be seen as a set of test requirements (e.g., one test requirement for each coverable item). Each test requirement can be formalized as a temporal logic test predicate. Therefore, every testing criterion produces in practice a list of test predicates; this step can easily be automated. The test generator takes one test predicate at a time (following an ordering policy [15] or following user requests), builds the trap property, and calls the model checker to get the counterexample for that trap property. The counterexample is converted to a test and given back to the test generator, which extracts other useful information (for example, the coverage of other test predicates). The test generation process by model checking is iterated until the desired test suite satisfies all feasible coverage goals. Similarly, in fault based testing, test predicates are generated from the original conditions by applying mutation operators [20]. In this approach, the testing criterion defines a list of test predicates for every mutation operator.

3 Design and Development of an Extensible Test Generator Tool

Eclipse RCP allows developers to use the Eclipse platform to create flexible and extensible desktop applications. Eclipse itself is built upon a plug-in architecture, and plug-ins are the smallest deployable and installable software components of Eclipse. This architecture allows Eclipse applications to be extended by third parties. Eclipse RCP provides the same modular concept for stand-alone applications. An Eclipse RCP application can decide to use parts of the components provided by Eclipse, like editors, menus and so on. It is even possible to design headless Eclipse based applications which use only the Eclipse runtime.
Eclipse provides the concept of extension points and extensions so that functionality can be contributed to plug-ins by further plug-ins: plug-ins which define extension points open themselves up for other plug-ins. Such an extension point defines a contract for how other plug-ins can contribute. The plug-in which defines the extension point is also responsible for evaluating the contributions; it therefore both defines the extension point and contains some code to evaluate the contributions of other plug-ins. A plug-in which defines an extension contributes to the defined extension point; the contribution can be done by any plug-in. Contributions can be code but also data, e.g., help contexts. Extensions to an extension point are declared in the plug-ins via the file plugin.xml, using XML.

The ExTGT core is itself a plug-in which provides several extensions to standard Eclipse components, like views, perspectives, menus, editors and so on. Moreover, ExTGT defines several extension points, presented in Section 3.1, which allow the addition of new functionality, and it provides several extensions of the defined extension points, as explained in Section 3.2. This setting is depicted in Fig. 2.

Fig. 2. ExTGT extensions and extension points: ExTGT defines its extension points based on common features of test generation tools. ExTGT is itself a plug-in to Eclipse and provides several extensions to standard Eclipse components such as views, perspectives, etc.

3.1 Extension points

ExTGT defines several extension points (see Figure 2), which are briefly explained in this section. In our approach, each extension point has a reference abstract class or interface which is required to be extended or implemented by the extensions of that extension point. The extension points are as follows:

extgt.asmspecreader: This extension point allows the introduction of several parsers for ASM specifications. Indeed, although the Abstract State Machine formalism is precisely defined in theory [6], several dialects exist for writing ASMs, and the designer can add the capability to read new formats by extending the class AsmSpecReader and introducing new parsers.

extgt.coveragebuilders: This extension point allows the definition of new coverage criteria. A coverage criterion is defined by the interface AsmCoverageBuilder which, given an ASM specification, builds a list of test predicates.

extgt.faultexpression: This extension point allows the introduction of new mutation operators, which must extend the FaultExpressionVisitor class. Every extension must define a method which takes an expression e and returns the possible faulty implementations of e.

extgt.generatormethod: New test generator methods can be introduced by extending the class TestGeneratorMethod. A TestGeneratorMethod is able to generate test sequences starting from a test predicate by exploiting the model checker's counterexample feature.
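An extension to one of these points is declared in the contributing plug-in's plugin.xml. A sketch for extgt.generatormethod might look as follows; only the extension point id comes from the paper, while the element and attribute names and the class are illustrative guesses:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<plugin>
   <!-- Hypothetical contribution; ExTGT's real schema may differ. -->
   <extension point="extgt.generatormethod">
      <generatorMethod
            class="org.example.extgt.MyModelCheckerMethod"
            name="My model checker"/>
   </extension>
</plugin>
```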

3.2 Extensions

ExTGT provides a set of extensions for the extension points introduced in Section 3.1 (see Figure 2):

The extension point extgt.asmspecreader has been extended by AsmetaLLoader, in order to read Asmeta specifications as defined by Gargantini et al. [19], and by AsmGoferLoader, to read AsmGofer [29] specifications.

extgt.coveragebuilders has been extended by BasicRuleVisitor, CompleteRuleVisitor, RuleUpdateVisitor, and MCDCCoverage, which compute the classical structural coverage as defined by Gargantini and Riccobene [18]. Combinatorial testing is introduced by the extension PairwiseCovBuild [9], while fault based testing is defined by the PluggableFaultBasedCov extension [20].

extgt.faultexpression has been extended by all the faults defined by Gargantini [20], which are: [ENF] expression negation fault (replacing a sub-expression with its negation), [LNF] literal negation fault, [MLF] missing literal fault, [ST0/1] stuck-at-0 or stuck-at-1 fault, [ASF] associative shift fault, and [ORF] operator reference fault.

extgt.generatormethod is extended by several plug-ins, each implementing test generation with a different model checker. ExTGT currently supports Spin [24], an explicit state model checker; the BDD and SAT based model checker SAL [11] (only for combinatorial testing); HySAT [14], a satisfiability checker for Boolean combinations of arithmetic constraints over real- and integer-valued variables which can also be used as a bounded model checker for hybrid (discrete-continuous) systems; and the Yices [12] SMT solver.

3.3 Architecture

ExTGT is built upon ATGT, a tool for test generation we have developed over the last years [4]. ATGT is composed of several packages, as shown in Figure 3, and it comes in two variants: atgt_cli, a command line version, and atgt_swing, which offers a graphical user interface. Although ATGT was developed well before ExTGT, it already uses several projects, each defining a functionality.
In terms of packages, ExTGT only needed a new package extgt_rcp, which defines the RCP application but reuses all the code already implemented by ATGT. Note that many graphical elements in ExTGT were reused from atgt_swing by using the Swing-SWT bridge.

3.4 Testing

Besides the usual JUnit tests for functional testing, to test ExTGT we use SWTBot [30]. SWTBot is an open-source Java based UI/functional testing tool for SWT and Eclipse/RCP based applications. SWTBot provides several APIs that are simple to read and write; the APIs also hide the complexities involved with SWT and Eclipse. Furthermore, SWTBot provides its own set of assertions that are useful for SWT.

Fig. 3. ExTGT projects: ExTGT reuses many of the components of the ATGT code base [4]. These components mainly cover different model checkers and test criteria.

Listing 1. A snippet of an SWTBot test case as was used to test ExTGT.

    // test the opening of a file
    public void openFile() throws Exception {
      SWTBotMenu fileMenu = bot.menu("File");
      SWTBotMenu openM = fileMenu.menu("Open");
      openM.click();
      ...
    }

The SWTBot tests were defined in a separate project, each in a different Java class. The methods that test the application consist of a sequence of commands simulating the use of the application. For example, Listing 1 illustrates a fragment of a test that opens a specification file.

3.5 ExTGT at work

A screenshot of ExTGT is presented in Figure 4. The user is not aware that ExTGT is an Eclipse-based application, since it is not an Eclipse plug-in but a real stand-alone application with limited reuse of the classical Eclipse workbench elements. The user loads the ASM specification into the tool. For instance, Listing 2 reports the specification of a Cruise Control system in AsmetaL.

Listing 2. AsmetaL Specification of a Cruise Control

    // the cruise control module
    asm cruiseControl
    import StandardLibrary

    signature:
      // declare universes and functions
      enum domain CCMode = {OFF | INACTIVE | CRUISE | OVERRIDE}
      enum domain CCLever = {DEACTIVATE | ACTIVATE | RESUME}
      dynamic controlled mode : CCMode
      dynamic monitored lever : CCLever
      dynamic monitored igOn : Boolean
      dynamic monitored engRun : Boolean
      dynamic monitored brake : Boolean
      dynamic monitored fast : Boolean

    definitions:
      // AXIOMS
      axiom inv_ignition over engRun : (engRun implies igOn)
      axiom inv_toofast over fast : (fast implies engRun)

      // Rules
      main rule r_CruiseControl =
        if not igOn then mode := OFF
        else if not engRun then mode := INACTIVE
        else // igOn and engRun
          par
            if mode = OFF then mode := INACTIVE endif
            if mode = INACTIVE and not brake and not fast
               and lever = ACTIVATE then mode := CRUISE endif
            if mode = CRUISE then
              if fast then mode := INACTIVE
              else if brake or lever = DEACTIVATE
                   then mode := OVERRIDE endif
              endif
            endif
            if mode = OVERRIDE and not fast and not brake and
               (lever = ACTIVATE or lever = RESUME)
               then mode := CRUISE endif
          endpar
        endif endif

    // initial state
    default init s1:
      function mode = OFF
      function lever = DEACTIVATE
      function igOn = false
      function engRun = false
      function brake = false
      function fast = false
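To connect this model with the trap-property scheme of Section 2, consider a purely illustrative test predicate for this specification: a state with mode = CRUISE is reachable. The trap property handed to the model checker would then be its negation, claimed to hold globally:

```latex
% Illustrative only: not an actual predicate produced by ExTGT.
% Test predicate: "a state with mode = CRUISE is reachable".
% Trap property:
\mathbf{AG}\; \neg(\mathit{mode} = \mathrm{CRUISE})
% A counterexample is an input sequence (ignition on, engine running,
% lever = ACTIVATE, ...) driving the machine into CRUISE, i.e., a test case.
```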

Fig. 4. Screenshot showing ExTGT in action: The coverage view shows the available test predicates, and ExTGT also shows the specification and details of the currently selected test predicate.

Fig. 5. Choosing a model checker plug-in in ExTGT: The user does not need to be aware of the extension points and extensions, but sees their effects in various options.

The user does not need to know which plug-ins are installed, but the application is aware of the extensions defined by its extension points. For example, the choice of the model checker is presented to the user by means of the simple pull-down menu shown in Figure 5. In this case the application searches the list of the extensions defined for extgt.generatormethod and builds the list of the available methods to be proposed to the user. After selecting a test generator method and the desired test predicates, the user can run the test generator of ExTGT to obtain the desired test cases.

4 Conclusions and Future Work

The Eclipse RCP framework has allowed us to use the Eclipse platform to create a flexible and extensible test generator tool, reusing some of the code we previously developed for ATGT. We plan to define new extensions, like the use of

further model checkers such as NuSMV [10] as test generator methods. We also plan to define new extension points, for example for the ordering policies by which test predicates are taken by the test generator. A major extension point we are planning to define is the specification notation, to make ExTGT able to read not only ASM specifications but also other formal notations like SCR or UML state machines.

Acknowledgments

Angelo Fumagalli and Matteo Foiadelli developed the initial prototype of ExTGT. Laura Bottanelli developed the HySAT module.

References

1. Paul Ammann and Paul E. Black. A Specification-Based Coverage Metric to Evaluate Test Sets. In HASE '99: The 4th IEEE International Symposium on High-Assurance Systems Engineering, Washington, DC, USA. IEEE Computer Society.
2. Paul Ammann, Wei Ding, and Daling Xu. Using a Model Checker to Test Safety Properties. In Proceedings of the 7th International Conference on Engineering of Complex Computer Systems (ICECCS 2001). IEEE.
3. Paul E. Ammann, Paul E. Black, and William Majurski. Using Model Checking to Generate Tests from Specifications. In Proceedings of the Second IEEE International Conference on Formal Engineering Methods (ICFEM'98). IEEE Computer Society.
4. ATGT: Abstract State Machines test generation tool project. it/gargantini/software/atgt/.
5. Dirk Beyer, Adam J. Chlipala, Thomas A. Henzinger, Ranjit Jhala, and Rupak Majumdar. Generating Tests from Counterexamples. In Proceedings of the 26th International Conference on Software Engineering (ICSE'04, Edinburgh). IEEE Computer Society Press.
6. E. Börger and R. Stärk. Abstract State Machines: A Method for High-Level System Design and Analysis. Springer Verlag.
7. John Callahan, Francis Schneider, and Steve Easterbrook. Automated Software Testing Using Model-Checking. In Proceedings 1996 SPIN Workshop, August 1996. Also WVU Technical Report NASA-IVV.
8. Andrea Calvagna and Angelo Gargantini. A Logic-Based Approach to Combinatorial Testing with Constraints.
In Tests and Proofs, volume 4966 of Lecture Notes in Computer Science. Springer-Verlag.
9. Andrea Calvagna and Angelo Gargantini. A formal logic approach to constrained combinatorial testing. Journal of Automated Reasoning.
10. Alessandro Cimatti, Edmund M. Clarke, Fausto Giunchiglia, and Marco Roveri. NuSMV: A New Symbolic Model Verifier. In CAV '99: Proceedings of the 11th International Conference on Computer Aided Verification, London, UK. Springer-Verlag.
11. Leonardo de Moura, Sam Owre, Harald Rueß, John Rushby, N. Shankar, Maria Sorea, and Ashish Tiwari. SAL 2. In Rajeev Alur and Doron Peled, editors, Computer-Aided Verification, CAV 2004, volume 3114 of Lecture Notes in Computer Science, Boston, MA, July 2004. Springer-Verlag.
12. B. Dutertre and L. de Moura. The Yices SMT solver. Technical report, SRI. Available at

13. André Engels, Loe Feijs, and Sjouke Mauw. Test Generation for Intelligent Networks Using Model Checking. In Ed Brinksma, editor, Proceedings of the Third International Workshop on Tools and Algorithms for the Construction and Analysis of Systems (TACAS'97), volume 1217 of Lecture Notes in Computer Science, Enschede, the Netherlands, April 1997. Springer-Verlag.
14. Martin Fränzle and Christian Herde. HySAT: An efficient proof engine for bounded model checking of hybrid systems. Formal Methods in System Design, 30(3):179-198.
15. Gordon Fraser, Angelo Gargantini, and Franz Wotawa. On the order of test goals in specification-based testing. Journal of Logic and Algebraic Programming, 78(6):472-490, July 2009.
16. Gordon Fraser and Franz Wotawa. Property Relevant Software Testing with Model-Checkers. SIGSOFT Software Engineering Notes, 31(6):1-10.
17. Gordon Fraser and Franz Wotawa. Complementary criteria for testing temporal logic properties. In Catherine Dubois, editor, Proceedings of the Third International Conference on Tests And Proofs (TAP), volume 5668 of Lecture Notes in Computer Science, pages 58-73, Zurich, Switzerland. Springer.
18. A. Gargantini and E. Riccobene. ASM-based testing: Coverage criteria and automatic test sequence generation. JUCS - Journal of Universal Computer Science, 7(11), Nov.
19. A. Gargantini, E. Riccobene, and P. Scandurra. A metamodel-based language and a simulation engine for abstract state machines. Journal of Universal Computer Science (JUCS), 14(12).
20. Angelo Gargantini. Using model checking to generate fault detecting tests. In International Conference on Tests And Proofs (TAP), Zurich, Switzerland, February 2007, Lecture Notes in Computer Science (LNCS).
21. Angelo Gargantini and Constance Heitmeyer. Using Model Checking to Generate Tests From Requirements Specifications.
In ESEC/FSE'99: 7th European Software Engineering Conference, Held Jointly with the 7th ACM SIGSOFT Symposium on the Foundations of Software Engineering, volume 1687, pages Springer, Gregoire Hamon, Leonardo de Moura, and John Rushby. Generating Ecient Test Sets with a Model Checker. In Proceedings of the Second International Conference on Software Engineering and Formal Methods (SEFM'04), pages , Thomas A. Henzinger, Ranjit Jhala, Rupak Majumdar, and Gregoire Sutre. Software Verication with Blast. In Model Checking Software: 10th International SPIN Workshop, Portland, OR, USA, May 9-10, Proceedings, pages Springer-Verlag, Gerard J. Holzmann. The Model Checker SPIN. IEEE Trans. Softw. Eng., 23(5):279295, Hyoung S. Hong, Insup Lee, Oleg Sokolsky, and Hasan Ural. A Temporal Logic Based Theory of Test Coverage and Generation. In Proceedings of the 8th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2002), volume 2280 of Lecture Notes in Computer Science, pages Springer Verlag Gmbh, D. Richard Kuhn and Vadim Okun. Pseudo-Exhaustive Testing for Software. In 30th Annual IEEE / NASA Software Engineering Workshop (SEW ), April 2006, Loyola College Graduate Center, Columbia, MD, USA, pages IEEE Computer Society,

62 27. Vadim Okun, Paul E. Black, and Yaacov Yesha. Testing with Model Checker: Insuring Fault Visibility. In Nikos E. Mastorakis and Petr Ekel, editors, Proceedings of 2002 WSEAS International Conference on System Science, Applied Mathematics & Computer Science, and Power Engineering Systems, pages , Sanjai Rayadurgam and Mats P. E. Heimdahl. Coverage Based Test-Case Generation Using Model Checkers. In Proceedings of the 8th Annual IEEE International Conference and Workshop on the Engineering of Computer Based Systems (ECBS 2001), pages 8391, Washington, DC, April IEEE Computer Society. 29. J. Schmid. AsmGofer Swtbot - ui testing for swt and eclipse Li Tan, Oleg Sokolsky, and Insup Lee. Specication-Based Testing with Linear Temporal Logic. In Proceedings of IEEE International Conference on Information Reuse and Integration (IRI'04), pages , Willem Visser, Klaus Havelund, Guillaume Brat, and SeungJoon Park. Model Checking Programs. In ASE '00: Proceedings of the 15th IEEE international conference on Automated software engineering, pages 311, Washington, DC, USA, IEEE Computer Society. 33. Willem Visser, Corina S. Pasareanu, and Sarfraz Khurshid. Test Input Generation with Java PathFinder. In ISSTA '04: Proceedings of the 2004 ACM SIGSOFT International Symposium on Software Testing and Analysis, pages 97107, New York, NY, USA, ACM Press. 34. Michael W. Whalen, Ajitha Rajan, Mats P.E. Heimdahl, and Steven P. Miller. Coverage Metrics for Requirements-Based Testing. In ISSTA'06: Proceedings of the 2006 International Symposium on Software Testing and Analysis, pages 2536, New York, NY, USA, ACM Press. 52

A probabilistic approach for choosing the best licence in the Eclipse community

Pierpaolo Di Bitonto (1), Maria Laterza (1), Paolo Maresca (2), Teresa Roselli (1), Veronica Rossano (1), Lidia Stanganelli (3)

(1) Department of Computer Science, University of Bari, Via Orabona 4, Bari - Italy
(2) Dipartimento di Informatica e Sistemistica, Università di Napoli Federico II, Via Claudio 21, Napoli - Italy
(3) DIST - University of Genoa, Viale Causa 13, 16145, Genova - Italy

{dibitonto, marialaterza, roselli,

Abstract. Software was born closed and proprietary, because it was highly dependent on the physical specifics of the computer on which it was executed. This made software a decisive factor in business competition. The first movement against closed, proprietary software was the Free Software Foundation (FSF), founded by Stallman. Since then, this movement has taken root, and many institutions, research centres, and communities have defined their own particular licence. The result has been an explosion of the number of licences. In order to erect a barrier against this proliferation, the OSI founded a commission, the License Proliferation Committee (LPC). Despite the efforts of the LPC, choosing the licence that best fits the specific needs of programmers is still a very complex activity. For this reason, this contribution aims to reduce the difficulties of that choice. Starting from the work by the LPC, a Bayesian network has been defined with the aim of (1) providing the user who has created a software product with a list of open source licences that are candidates for application to it, and (2) describing to the user the reasons that led the system to offer such licences, reporting the licence text clearly and completely, without neglecting the technical details. In this way users can acquire a degree of knowledge of these types of licences and develop a personal opinion about them.
Keywords: Bayesian network; Eclipse community; open source licences.

1 Introduction

Software (like other intellectual works) is protected by copyright, just like literary or musical works. Unlike artworks, however, software is a commercial product that offers a

practical solution to real life problems. Nowadays it is everywhere: in cars, on mobile phones, in the robots on assembly chains that build consumer goods, and in thousands of other tools in everyday use. Software was born closed and proprietary, because it was highly dependent on the physical specifics of the computer on which it was executed. The hardware companies also produced the software. This made software a decisive factor in business competition. The first movement against closed, proprietary software was the Free Software Foundation (FSF), founded by Stallman. He stated that free software is a matter of the users' freedom to run, copy, distribute, study, change and improve the software. In the early '90s, the idea of free software acquired connotations that were quite different from the original ones stated by Stallman. The idea of free software overlapped with that of gratuitous software, which sometimes became synonymous with poor quality. In 1998 the Open Source Initiative (OSI) was born with the aim of ensuring the correct use of the term Open Source (OS), in order to coordinate the different open source projects. Even if the OSI and the FSF share the same general ideas, the OSI assigns priority to improving software product quality and does not exclude the coexistence of open source and proprietary software in the same project. The OSI has defined the main characteristics of an open source licence. Among the most popular open source licences are the GNU GPL and BSD, each of which has its own peculiarities. Starting from January 2004, with the birth of the Eclipse Foundation and thanks to the diffusion of Java (the cross-platform language par excellence), the open source philosophy has taken root. Today it often happens that large corporations, even if competitors, become allies to produce software based on common interests.
Moreover, the open innovation network paradigm has gathered momentum; this is based on the concept of cooperative development for common goals and competition on what differentiates companies in their core business. An example of this model is that of the two giants, Nokia and Samsung, which developed the mobile platform together within the Eclipse community, while offering different services and each applying its own type of open source licence. The involvement of different partners has led to numerous variants of the classic OS licences. The result has been an explosion of the number of licences. In order to erect a barrier against this proliferation, the OSI founded a commission, the License Proliferation Committee (LPC). Despite the efforts of the LPC, choosing the licence that best fits the specific needs of programmers is still a very complex activity. For this reason, this contribution aims to reduce the difficulties of that choice. Starting from the work by the LPC, a Bayesian network has been defined with the aim of (1) providing the user who has created a software product with a list of open source licences that are candidates for application to it, and (2) describing to the user the reasons that led the system to offer such licences, reporting the licence text clearly and completely, without neglecting the technical details. In this way users can acquire a degree of knowledge of these types of licences and develop a personal opinion about them. Moreover, the suggestion will be contingent on the policies for allowing the software to spread, which will be revealed through a questionnaire. The next section gives an overview of open source licences. Section 3 discusses the Bayesian network implemented to handle the uncertainty in determining the variables

that must be manipulated when choosing an open source licence. Finally, in Section 4 the conclusions and future developments are discussed.

2 Open Source Licences Analysis

In order to establish a support system for choosing the open source licence that best suits the user's requirements, the various types of licences, starting from the LPC classification, have been studied. The main goals of the study are, first of all, to understand the basic characteristics of each licence and the OSI licence proliferation problem, and secondly to define the Bayesian network best suited for this purpose.

2.1 OSI licences classification

According to the study by the LPC, there are three main problems caused by the proliferation of licences: (1) too many different licences make it difficult for the licensor to choose among them the one that best fits her/his needs; (2) some licences do not play well together; (3) too many licences make it difficult to understand what you are agreeing to in multi-licence distribution. The OSI was not able to solve these problems because it can only define, on the basis of the minimal requirements, whether a licence is open source or not. It cannot forbid the use of any licence. All it can do is educate the licensor to use a small subset of licences and, whenever possible, not to combine different licences. In 2005, the OSI established further restrictions on open source licences: (1) the licence should not be repetitive; (2) it must be written in a clear way and must be easy to understand; (3) it must be reusable.
With the intention of setting order in the open source scenario, the LPC has classified all licences into seven different categories:
- Popular: this group accounts for the most commonly used licences in the software development community;
- Special purpose: licences suitable for the protection of particular issues; they generally relate to software products in schools or public administrations;
- Redundant: licences that totally or partly reflect a popular licence;
- Non-reusable: licences that are not reusable because they are too specific;
- Superseded: licences that have been superseded by new versions of the same licence;
- Voluntarily retired: licences that have been voluntarily withdrawn and are no longer used;
- Other & Miscellaneous: special licences resulting from the mixing of different licences, which do not fall into the other categories.

The OSI idea is to induce users to adopt the licences that belong to the popular category. In general, the popularity of a licence implies that it comes with a tradition of text interpretation; so, in case of disputes, this significantly reduces the risk of confusion. In common law countries, the legal interpretation of the licences is a precedent that the courts take into account. The classification provided by the OSI is used to define the Bayesian network. Comparing similar licences that match the user's requirements, the Bayesian network chooses a popular licence. The superseded and non-reusable categories are considered only if the user wants to relicense a work deriving from any of the above mentioned licences. The remaining categories (Special purpose, Voluntarily retired, and Other & Miscellaneous) have been completely excluded.

2.2 Licence characterization

In order to define the Bayesian network, more than fifty licences have been examined, including BSD, ASL (Apache Software License) [6], GPL, and EPL (Eclipse Public License) [7]. Each of them has been analyzed from different perspectives: the rights and duties of the licensor (the developer of the software, who decides under which licence to release it), the rights and duties of the licensee (who wants to use the software), legal notes, and formal aspects. For each of these dimensions several specific characteristics are considered, assigning a value on a scale from 1 to 5 points. In order to understand how a licence is characterized, two different licences are presented. The first, the GPL, is a typical example of a free software licence; the second, the EPL, is a typical example of an open source licence.

2.2.1 General Public License

The General Public License (GPL) is a free software licence created by the Free Software Foundation. Up to now, the FSF has published three versions of the GPL, all of which can potentially be adopted to license a software product.
All GPL licences give the licensee permission to copy, modify and distribute the software program, with or without modifications. What distinguishes the GPL from non-copyleft licences are two properties: "persistency" and "propagation". The GPL is persistent because it imposes constraints on the distribution of the software, which must be carried out under the terms of the GPL. Moreover, it gives the freedom to distribute copies of the software, even for a fee. The basic idea is that the licensee can change the software or use its parts in new programs, and nobody can take away these freedoms. The GPL is a propagative licence because it forces the licensor of a derived work to relicense the new work again under a GPL licence. Modification and distribution of derivative works are allowed under the terms of the GPL. The new licence must include the copyright notes, and it must clearly define the changes with respect to the original version.

2.2.2 Eclipse Public License

The Eclipse Public License (EPL) is a software licence born of the open source Common Public License, an open source software licence published by IBM. The purposes of the EPL are to encourage and promote open source development while maintaining the ability to reuse the content under another licence. Whoever receives programs licensed under the EPL can use, copy, modify and distribute the work, but must keep the changes public and always relicense under the EPL. The EPL has some conditions that make it similar to the GPL, but there are some key differences. The EPL grants the licensor the right to decline quality assurance; it prohibits relicensing the modified software under licences other than the EPL; and it forbids using the name and the intellectual property of the previous licensor to promote derived works. Finally, the licensor is obliged to provide copies of the source code together with the original work, while the licensee may reproduce or modify the original work, creating derivative works. S/he may grant the right to a universal licence which grants all rights for the exploitation of the objects created. As regards the legal terms, the EPL provides information on legal recourses, on the refunding of legal costs, and on the duration of legal action, but not about the use of APIs. Finally, the following figure shows the different degrees of restriction among several licences (MIT/X, BSD, Apache, AFL, CDDL, EPL, MPL, CPL, OSL, LGPL, GPL) along the dimensions of reciprocity, reach & relicensing restrictions, and patent grant; the differences between the EPL and the GPL can also be read from it.

Figure 1. Licences' degree of restriction

3. The Bayesian Belief Network definition

Bayesian Belief Networks (BBN) are adopted to implement a choice among a set of variables under conditions of uncertainty. The variables are extracted from the knowledge gained from a domain analysis.
The induction process uses a priori information, provided by the domain expert, which is modified by observed data (called evidence) through the use of Bayes' theorem. The probabilistic inference is obtained

through a procedure of estimating parameters and testing hypotheses. Bayesian probability interprets the concept of probability as a measure of a state of knowledge, in contrast with the mathematical interpretation based on the frequency with which an event occurs [8]. One of the crucial characteristics of the Bayesian view is that a probability is assigned to a hypothesis, whereas under the classical perspective a hypothesis is typically rejected or not rejected without directly assigning it a probability. A common criticism of the Bayesian definition is its subjectivity: the probability values that can be assigned are strictly dependent on the knowledge of the domain expert. So, the probability estimations are degrees of belief that must be assigned according to the consistency constraint. In order to define a BBN it is important to define the network topology, identifying the domain variables and the relationships among them; this is much easier than specifying the classic probabilities associated with each variable. Once the topology of the BBN has been specified, it is necessary to specify the conditional probabilities for the nodes that participate in the direct dependencies, and to use them in order to calculate any probability value.

3.1 Domain variables identification

The network topology can be thought of as an abstract knowledge base that can incorporate a wide variety of different situations. In order to define the network topology, the first step is to choose the nodes that will make up the network. There are two kinds of nodes: observable and not observable. The first (indicated in Fig. 2 with a single circle) are used to introduce new evidence into the reasoning process; the others (indicated in Fig. 2 with a double circle) are used to direct the reasoning process.
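The role of Bayes' theorem in this kind of inference can be illustrated with a toy computation. The hypothesis, the evidence and all the numeric values below are invented for illustration only; they are not the values actually assigned in the paper's network.

```python
# Toy illustration of Bayes' theorem as used in a BBN over licence choice.
# All numbers are hypothetical placeholders, not taken from the paper.

def posterior(prior, likelihood, evidence_prob):
    """P(H | E) = P(E | H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

# Hypothesis H: "the GPL group is the right choice for this user".
p_gpl = 0.15                    # a priori degree of belief
p_restrictive_given_gpl = 0.9   # GPL-leaning users tend to want restrictive
                                # transmission of derivative works
p_restrictive = 0.5             # marginal probability of observing the evidence

# Evidence E: the questionnaire answer "restrictive transmission".
p_gpl_given_restrictive = posterior(p_gpl, p_restrictive_given_gpl, p_restrictive)
print(round(p_gpl_given_restrictive, 3))  # 0.27
```

Observing the evidence raises the belief in the GPL group from 0.15 to 0.27; answers to the other questionnaire items would update the remaining nodes in the same way.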
The network has two root nodes:
- history: includes basic user profiling, since it introduces into the network information about the user's use of previous software licences;
- transmission derivative works: considers the needs of users who are searching for a licence that allows them to relicense derivative works.

The evidence on these nodes is provided by the answers to questions 2 and 3 in Appendix A. The second level of the network considers some groups of popular licences, such as BSD, GPL, MPL (Mozilla Public Licence) [9] and CPL (Common Public Licence) [10], and some groups of redundant licences. Each group contains a further specialization that allows the identification of the most appropriate licence. The data serving to choose the licence group and the specific licence are gained by means of the questionnaire reported in Appendix A. The other nodes are not observable, so they are defined as hidden nodes. Within each branch, the macro-groups are chosen first; after analysing the features of each group, the single licence is determined.

3.2 BBN structure definition

The structure of the graph requires the definition of the semantics of the nodes (i.e. the identification of the possible states for each node) and of the semantic relations (i.e. the identification of the cause-effect relationships). In particular, the cause-effect relationships are illustrated in the graph and are explained in the conditional probability tables. The goal of the defined BBN is to determine how to license an original work. In the case where the user modifies another software product, the distribution of the modified software will depend on the constraints of the licence of the original product. The identification of this requirement, like all the evidence necessary for the inference, will be expressed by the answers to the questionnaire reported in Appendix A. The analysis of the graph is made through the presentation of the various branches of the BBN. Greater emphasis is given to the observable nodes, which allow new evidence to be included. The root node "History" has a probability distribution over seven possible states; it identifies the categories of membership of the licences (BSD, MIT [11], MPL, CPL, GPL, LGPL [12], Prima). The node "Transmission derivative works" has a probability distribution over two states: permissive and restrictive. The node "Using the name of the licensor for advertising derivative works" has three different sub-paths: MIT licence group, popular, and redundant. If the licensor intends to transfer the right to use his name to advertise derived works, a greater probability is assigned to the MIT licence group; otherwise the probability of the BSD group is increased. Other observable nodes are those that affect the choice within the popular or redundant categories: patent, legal notes, better explanation of copyright clauses, documents availability, export control. (A fragment of the resulting graph is shown in Figure 2.)

Legend of Figure 2:
- H: History; TDW: Transmission Derived Work; P: Popular; R: Redundant; O&M: Other and miscellaneous;
- GPLG: GPL Group; BSDG: BSD Group; MPLG: MPL Group; MITG: MIT Group;
- MITL: MIT Licence; BSD 3.0: BSD licence 3.0; APL 2.0: Adaptive Public Licence; AAL: Attribution Assurance Licence; UL: University of Illinois; HPND: Historical Permission Notice and Disclaimer; EFL 2.0: Eiffel Forum Licence 2.0; FL 2.0: Fair Licence;
- Pa: Patent; LN: Legal Notes; CCE: Copyright Clause Explanation; DA: Documentation Availability; EC: Export Control; COW: Commerciability Original Work; RE: Responsibility Exoneration.

Figure 2. A fragment of the BBN structure

The GPL licence group also considers the LGPL sub-group; the network suggests this sub-group if the licensor needs to market the original work. The MPL licence group also considers the CPL sub-group. The main difference between these two groups of licences lies in the possibility of extending original works with works under other licences; this possibility is stronger in the MPL group than in the CPL group.

3.3 Probability value definition

Once the network topology has been specified, the conditional probability table for each node is formalized. Each column of the table contains the conditional probability of each value of the node given a conditioning event. Figure 3 shows an example of a conditional probability table. The a priori probabilities of the root nodes (Figure 3(b)) and the conditional probabilities of a child node, BSD Licence Group (Figure 3(c)), are reported as examples. The values are defined in a subjective way, on the basis of the study reported in Section 2.
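How such conditional probability tables drive the inference can be sketched in code. The structure below mirrors the two root nodes and the BSD Licence Group child of Figure 3, but the CPT entries and prior values are illustrative placeholders, not the values the authors actually assigned.

```python
# Sketch of a CPT-based computation over the two root nodes and the
# "BSD Licence Group" child node. All numeric values are hypothetical.

HISTORY_PRIOR = {"MIT": 0.15, "BSD": 0.15, "CPL": 0.15, "MPL": 0.15,
                 "GPL": 0.15, "LGPL": 0.15, "Prima": 0.10}
TRANSMISSION_PRIOR = {"Permissive": 0.5, "Restrictive": 0.5}

# Partial CPT: P(BSD group = True | Transmission, History) for a few
# parent configurations (illustrative values only).
BSD_CPT = {
    ("Permissive", "BSD"): 0.9,
    ("Permissive", "MIT"): 0.4,
    ("Restrictive", "BSD"): 0.2,
    ("Restrictive", "MIT"): 0.1,
}

def p_bsd_group():
    """Contribution to the marginal P(BSD group = True), summing over
    the parent configurations listed in the (partial) CPT above."""
    total = 0.0
    for (t, h), p_true in BSD_CPT.items():
        total += TRANSMISSION_PRIOR[t] * HISTORY_PRIOR[h] * p_true
    return total

def p_bsd_group_given(transmission, history):
    """Belief in the BSD group once both root nodes are observed."""
    return BSD_CPT.get((transmission, history), 0.0)

print(round(p_bsd_group(), 2))                           # 0.12
print(p_bsd_group_given("Permissive", "BSD"))            # 0.9
```

Before any questionnaire answer is entered, the node's belief is a weighted sum over the priors; once the answers fix the parents (the evidence), the belief collapses to the corresponding CPT entry.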

History (a priori): MIT 0.15; BSD 0.15; CPL 0.15; MPL 0.15; GPL 0.15; LGPL 0.15; Prima 0.15.

Transmission of derived work (a priori): Permissive 0.5; Restrictive 0.5. (a)

BSD Licence Group (True/False), conditioned on Transmission of derived work (Permissive, Restrictive) (b) and History (MIT, BSD, CPL, MPL, GPL, LGPL, Prima) (c).

Figure 3. Conditional probability table example

4. Conclusion and future works

This paper addresses the problem of licence selection in an open source community. It presents a prototype BBN that is able to support programmers in choosing the licence that best fits their needs. The probabilities of the BBN are defined after an analysis of the open source/free software licences.

The research undertaken so far has highlighted that the introduction of a mixed methodology for assigning probabilities could offer a substantial improvement on the calculation model based on Bayesian inference. Moreover, the mash-up technique [13] should be considered in order to extract data on licences from different open source communities, such as Eclipse, with the aim of automatically improving the inference on the Bayesian network. In this sense, some plug-ins able to support this functionality should be studied. Further improvements could include more appropriate user profiling, in order to gain a greater knowledge of the history of the software company that wants to license the software. In this way, the BBN would not rely only on the indications of the licensor. In this context, Bayesian Belief Networks have shown their ability to express a complex set of relationships in simple terms, thus offering good performance.

Appendix A

The questionnaire reported below must be filled in by the licensor. In this way the BBN acquires the information needed to suggest the licence that best fits the user's needs.

1. Derivative works: Is the software that you are licensing an original work, or is it licensed under another licence? (List of examined licences / No, it is an original work)

2. History: Have you ever licensed a software product in the past? Which software licence do you use most frequently? (List of used licences / This is the first time that I have licensed a software product)

3. Transmission of derivative works under any licence: All the rights (and duties) concerning the software are defined by the software licence. The licensor can choose to propagate these rights (and duties) to derived works. Alternatively, the licensor can free the derived work from any constraints; in this way the licensor of the derived work is free to relicense it without any constraints with respect to the original licence. Which kind of licence do you prefer?
(a licence that allows you to license derivative works under licensing terms different from the original ones / a licence that requires you to license derivative works only under the same licensing terms / I do not know)

4. Using the name of the copyright holder: The name of the copyright holder (who made the original work) can be used to discredit or promote the spread of derived works. Do you want to forbid this use of the name of the copyright holder? (Yes / No / I do not know)

5. Copyright: Open source licences contain clauses that specify the rights to use the software (copyright). In some licences these clauses are treated clearly and completely, in others summarily. Do you want to choose a licence that clearly explains all copyrights? (Yes / No / I do not know)

6. Patents: The patentability of software is a very controversial matter. Some licences include references to software patents in their text; other open source licences do not consider this aspect. Do you need a licence that addresses the problem of software patents? (Yes / No / I do not know)

7. Documentation: Do you need a licence that obliges the licensor to provide the software documentation? (Yes / No / I do not know)

8. Legal notes: Do you need a licence that clearly states the legal terms? (Yes / No / I do not know)

9. Legal recourse: With reference to the previous question, a further clarification is necessary regarding the possibility of obtaining more information about legal actions. Do you need further explanation to clarify how you should conduct legal recourses? (Yes / No / I do not know)

10. Export control: With reference to the legal terms, do you require additional explanations regarding the regulation of the export of software products? (Yes / No / I do not know)

11. API using: With reference to the legal terms, do you need to specify information about the use of the API (Application Programming Interface)? (Yes / No / I do not know)

12. Guarantee: Some open source licences contain clauses that allow you to decline warranty on the software products. In some kinds of licence this clause is explained meticulously, but lacks legibility; in other cases it is clear but lacks some details. Which kind of clause do you prefer? (meticulous but not always clear / concise and readable / I do not know)

13. Responsibilities:

Some open source licences contain clauses that allow you to decline responsibility for damage derived from the use of the software. In some licences this clause is explained meticulously, but lacks legibility; in other cases it is clear but lacks some details. Which kind of clause do you prefer? (meticulous but not always clear / concise and readable / I do not know)

14. Multiple licences: Is it important that the licence allows you to use multiple licences in order to improve the compatibility of the code (from the legal point of view)? (Yes / No / I do not know)

15. For commercial work: Some licences allow licensees who need to modify and relicense the software to use a proprietary licence. Do you want to allow a licensee to build her/his business on your software? (Yes / No / I do not know)

References

Russell, S. J.; Norvig, P. (2003). Artificial Intelligence: A Modern Approach (2nd ed.). Upper Saddle River, New Jersey: Prentice Hall.

Colazzo, L.; Molinari, A.; Maresca, P.; Stanganelli, L. Mashup learning and learning communities. In: Workshop on Distance Education Technologies (DET 2009), part of the 14th International Conference on Distributed Multimedia Systems (DMS 2009), San Francisco Bay, USA, September 10-12. KSI Press.

Eclipse Jazz

Chair: Paolo Maresca
Dipartimento di Informatica e Sistemistica, Università di Napoli Federico II

Adding collaboration into Rational Team Concert

Furio Belgiorno, Ilaria Manno, Giuseppina Palmieri, and Vittorio Scarano
ISISLab, Dipartimento di Informatica ed Applicazioni R.M. Capocelli, Università di Salerno, Fisciano (SA), Italy

Abstract. Software development is a team activity which requires effective collaboration among the people involved. Many systems support the coordination and organization activities of software development projects, but offer poor support for communication among the developers, which often happens in face-to-face chance meetings or via instant messaging systems. In this paper we present the integration of collaboration and communication functionalities coming from CoFFEE (a platform supporting collaborative learning) into Rational Team Concert (RTC), a development environment with advanced features to support the coordination of software development projects. The focus of this paper is the architecture of the integration between the CoFFEE collaborative tools and RTC, which leverages the composability and extensibility of Eclipse and Jazz, the open source projects on which CoFFEE and RTC are based.

1 Introduction

Rational Team Concert [1] is an IBM product providing a development environment, built on Eclipse [2] and Jazz [3], that integrates advanced control and coordination collaborative functionalities. The choice of building RTC on open source projects (Eclipse and Jazz) allows and stimulates the creation of extensions and research projects based on or integrated with it (see, for example, the list in [4], FriendFeed [5] and Ensemble [6]). Today, in Rational Team Concert the support for synchronous communication is provided through simple tools (chat) or by shared audio/video/whiteboard communication tools.
What is missing, currently, is the ability for the team to (first) discuss¹ and debate (with advanced synchronous tools) during the development process by using the platform, and (second) to include the discussions in the project repository in such a way that they build into the team knowledge. In this paper we report on the status of a project whose goal is to integrate tools in RTC to support synchronous collaboration among the developers through structured shapes of communication. The communication tools come from CoFFEE [7-9], a suite of applications designed to support face-to-face learning and providing an extensible set of collaborative tools which offer several different shapes of shared collaborative space.

¹ RTC offers limited support for communication, since only simple instant messaging, interfaced with an existing Lotus SameTime or Jabber server, is available.

In [10] we presented the rationale of the project and the effect of integrating advanced communication mechanisms into a Collaborative Development Environment. In that context, we presented the effects of the integration of the CoFFEE tools in RTC, by emphasizing the collaboration functionalities and the advantages of a mechanism, embedded in the RTC environment, for recording the content produced in a collaborative session, so that it is available to the developers in the future. Our thesis was that the ability to trace back the discussion that originated a design decision is an important tool for leveraging the group knowledge more effectively and efficiently. In this paper we describe the integration of CoFFEE and RTC from a technical point of view, with a particular focus on the architecture. We also want to stress that the integration between CoFFEE and RTC has been possible thanks to the choice of both systems to build on open source platforms (Eclipse and Jazz), confirming that the extensibility and composability of these open source systems are key points in developing new features. In the following we first describe the motivations for our work, relating it to some existing tools, then provide a brief description of Rational Team Concert and of its main functionalities. We then describe the details of the integration between CoFFEE and RTC and conclude with some final comments.

2 The motivations to our work

Software development inherently implies collaborative activities involving teams whose size can range from a few to many people; often several teams are involved, responsible for different aspects or phases of the project. The members of each team can be co-located in the same workplace or distributed around the world and, in the same way, the various teams can be distributed around the world.
In every case, the tools supporting collaboration and communication among the people in a team and among teams are becoming more and more important, as both academic and industrial research show [11-14]. Carmel and Agarwal [15] defined three kinds of collaborative activities influenced by the distance in a development team: coordination, control and communication. Coordination is the act of integrating the tasks of the various teams, so that each team contributes to the overall objective; orchestrating the integration often requires intense and ongoing communication. Control is the process of adhering to goals, policies, standards, or quality levels. Communication is a mediating factor affecting both coordination and control: it is the exchange of complete and unambiguous information, that is, information through which the sender and receiver can reach a common understanding. As described in [16], most of the existing systems integrated with development environments, such as SourceForge, GForge and TRAC, aim to support collaboration mainly in the coordination and control activities. All these systems are web-based and provide functionalities to support coordination and control over the development activities, such as mailing lists, discussion forums, tracking systems and source management systems like CVS or SVN. However, none of them provides support for synchronous communication, presence and task awareness; that is, they do not support team awareness.

These systems lack support for the communication that happens informally in software development teams, that is, the unplanned communication among developers that supports lateral activities (such as filling in implementation details, correcting mistakes, debugging). Several analyses [17, 18] indicate that informal communication is critical to the success of software development: on the one hand it allows an easy and quick exchange of useful information; on the other hand it is not traced in the classical systems supporting the software development process. Indeed, such communication uses informal channels that are not traced in the development environment. Unplanned communication in co-located teams often happens in face-to-face chance meetings, for example at the coffee machine or at lunch. In distributed teams, informal communication often happens through e-mail or instant messaging. In both scenarios the information exchanged stays outside the tracking system: it resides only in the minds of the developers and maybe in their mailboxes or IM logs. This means that the information is not easily accessible for further needs. From this consideration arises the necessity to integrate informal communication as much as possible into the development environment. The main examples of research into integrating communication tools within development environments [19] are CollabVS and Rational Team Concert, which also integrate communication functionalities into the development process. CollabVS [20] is an extension of Visual Studio which provides functionalities to support collaboration: it offers real-time presence information, an audio/video channel and a shared whiteboard to support users in exchanging information.
3 Rational Team Concert

Rational Team Concert (RTC) is a development environment which provides a wide set of functionalities to support the coordination of software development projects, including the management of teams, the organization of the project in areas, the management of work plans and deadlines, source control and many other functionalities. It offers several interfaces: an Eclipse-based client, a Microsoft Visual Studio client and a Web interface. The clients integrated in the development environments allow developers to access all the services provided by the RTC server from within their work environment. The Web interface supports the administration of projects and gives access to the content of the repository, such as information, tasks and project areas. RTC builds on Jazz, an open source platform developed by IBM. Jazz is a scalable, extensible team-collaboration platform that integrates tasks across the software lifecycle. The platform is designed as building blocks, aiming to facilitate the creation of new products and tools based on Jazz. Jazz includes an extensible repository that provides a central location for tool-specific information. The repository stores data as top-level objects called items. The repository includes auditable item types, which maintain a history of an item's creation and subsequent modifications for audit purposes. Jazz defines the project area as the representation of a software project: it includes the project deliverables, team structure, process and schedule. The project area is stored in the repository as a top-level item and references the artifacts related to the project (i.e.

Fig. 1. The applications of the CoFFEE suite and the relationships among them.

the artifacts being all the objects created for the management of the project, such as plans, work items and so on). Access to the project area and to the artifacts is regulated by permissions. A complex project can have several active lines of development, known as timelines, and each timeline has its own deliverables, teams, schedules and so on. The control of the work process is formalized in the project process, which defines the collection of roles, practices, rules, and guidelines used to organize the flow of work. Each project area has a project process, which can be customized for each team by defining user roles and their permissions.

4 CoFFEE tools in RTC

The integration of the CoFFEE tools with RTC provides developers with a variety of tools supporting several shapes of collaboration. Moreover, the integration has been implemented so that the content of the tools is recorded in the repository provided by Jazz. This makes it possible to re-load and re-open previous discussions, and it is a key point in ensuring that any collaborative act that happened in CoFFEE is saved and available for further use. In the following we first describe CoFFEE and then show how the CoFFEE tools are integrated in RTC, with some examples of their usage.

4.1 CoFFEE

CoFFEE (Collaborative Face-to-Face Educational Environment) is a set of Rich Client Applications [9] designed to support learning through argumentation and debate in the classroom. The main applications are the CoFFEE Controller and the CoFFEE Discusser, used by the teacher and the students respectively during a collaborative lesson (see Fig. 1). These applications allow the teacher and the students to use the collaborative tools provided by CoFFEE. The CoFFEE tools offer different shapes of shared work space.
In the following we provide a brief description of the main tools; details can be found in [9]. The Threaded Discussion tool allows the users to synchronously participate in a discussion which is structured in threads. The thread structure aims to overcome the main limit of the standard chat: the lack of organization of the discussion. A different kind of organization can be built with the Graphical Discussion tool. It provides a

shared graphical space where each user can contribute his considerations, which appear as textual notes in boxes. The tool also allows linking the boxed notes through several kinds of arrows, so that diagrams representing concepts, notes and their relationships can be created. Both the Threaded Discussion tool and the Graphical tool aim to support the organization of ideas and the sharing of knowledge: they overcome the temporal sequence of the users' contributions and allow the users to focus on the relationships among the concepts. Another tool provided by CoFFEE is the CoWriter, which allows users to collaborate in writing a document (even if just one user at a time can write, while the others see the changes in real time). The Positionometer tool supports voting and allows evaluating the opinion of the users about a question. Beyond the previous tools, CoFFEE also offers a Chat tool and a Presence tool, which provide the basic functionalities to support team awareness and communication. All the CoFFEE tools provide a set of configurable options, but the Threaded Discussion tool and the Graphical tool offer a particularly wide set of options to configure many aspects of their appearance and behavior, so that they can be used in a variety of usage scenarios. The set of CoFFEE tools is extensible: it is possible to implement new tools and integrate them in CoFFEE without changing the existing applications or tools, thanks to the architecture inherited from Eclipse and to the integration mechanism based on the extension point concept. Many more details about CoFFEE can be found on the project website (http:// ) and in the literature [7-9, 21].

4.2 User Interface

Here we describe the user interface resulting from the integration between CoFFEE and RTC, as shown in Fig. 2. During the integration process we introduced in the RTC user interface a CoFFEE toolbar with a button to start a discussion with the CoFFEE tools.
From the toolbar the user can start a new discussion and choose the desired tool among the available ones. In this case he is the Coordinator of the discussion and has to provide some information, such as his name and the topic of the discussion. At this point, he can start any tool at any moment. Alternatively, the user can join an existing discussion: he is provided with a list of the active discussions and has to choose which one he wants to join. In this case he is a Participant, and the available tools are decided by the Coordinator, who also decides whether to use the tools in a dedicated CoFFEE perspective or in the current one. A discussion can be closed by the Coordinator, who can save it to the Jazz repository. The saved discussions are represented as Jazz artifacts: as shown in Fig. 2, on the left, an item named CoFFEE Discussions is visible in the Team Artifacts view, with the tree of available discussions. The saved discussions are also visible in the dedicated CoFFEE Discussions view, shown at the bottom of Fig. 2, where further details, such as the coordinator name, the creation date and the tags, are provided. Each discussion can be reloaded, deleted or even saved to the file system by any user who can access the repository, even users who did not participate in the discussion. The possibility to reload a discussion is particularly important: this functionality opens the tools that

Fig. 2. The user interface of CoFFEE integrated in Rational Team Concert. In the toolbar, CoFFEE adds a button (the 7th) to launch a CoFFEE discussion. In the Team Artifacts view (on the left), the item CoFFEE Discussions lists the discussions saved in the Jazz repository. The saved discussions are also visible in the CoFFEE Discussions view (at the bottom), where more details are provided. The central view shows a discussion with the Threaded Discussion tool.

were used for the discussion and replays the contributions that were made, allowing the discussion to continue, even with other users. We have to say that currently there is no mapping between the RTC user roles (developer, contributor, administrator and so on) and the CoFFEE user roles (Participant and Coordinator), even if a stricter integration would be possible, for example allowing only the RTC team leader to start discussions. Like other artifacts, saved discussions can be added to the Favorites folder and organized in sub-folders by the user. Artifacts can be referenced from inside the RTC Work Items, but easy support for this feature for CoFFEE discussions is currently under development.

4.3 CoFFEE tools at work in RTC

Here we show some examples of usage of the Threaded Discussion tool and of the Graphical tool. At start-up, each tool offers the possibility to configure several options to customize its appearance and behavior.
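As a concrete picture of this kind of per-tool configuration, the main start-up options of the Threaded Discussion tool (categories, contribution labels, anonymity) could be modeled as a small configuration object. This is only an illustrative sketch: the class and field names below are ours, not CoFFEE's actual API.

```java
import java.util.List;

// Illustrative sketch only: these names are ours, not CoFFEE's actual API.
// It models the start-up options of the Threaded Discussion tool:
// optional categories, configurable contribution labels, and an
// anonymity flag that can also be toggled at runtime.
class ThreadedDiscussionConfig {
    final boolean useCategories;           // one discussion tree per category
    final List<String> categories;         // ignored when useCategories is false
    final List<String> contributionLabels; // empty list = unlabeled contributions
    boolean anonymous = false;             // changeable while the tool runs

    ThreadedDiscussionConfig(boolean useCategories, List<String> categories,
                             List<String> contributionLabels) {
        this.useCategories = useCategories;
        this.categories = categories;
        this.contributionLabels = contributionLabels;
    }

    public static void main(String[] args) {
        // A configuration like the discussion shown in Fig. 2.
        ThreadedDiscussionConfig cfg = new ThreadedDiscussionConfig(
                true,
                List.of("Features", "Open issues", "Deadlines"),
                List.of("Question", "Argument", "Comment"));
        cfg.anonymous = true; // e.g. to collect opinions without peer pressure
        System.out.println(cfg.categories + " / anonymous=" + cfg.anonymous);
    }
}
```

The point of such an object is that the Coordinator fixes the structural options (categories, labels) when starting the tool, while the anonymity flag remains mutable at runtime, matching the behavior described in the text.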

Fig. 3. Discussion with the Graphical tool.

The most evident configuration option of the Threaded Discussion tool concerns the possibility to use categories: as shown in Fig. 2, on the left of the tool view there is a frame named Categories, and each category identifies a discussion tree. The usage of categories allows identifying several topics related to the same subject and building a structured discussion tree for each topic: for example, the discussion reported in Fig. 2 was about the Features, the Open issues and the deadlines of a release. It is also possible, at the start-up of the tool, to decide not to use categories; the tool will then not show the Categories frame and will provide a single discussion tree. Another relevant configurable option of the tool concerns the labels of the contributions: when the Coordinator starts the tool, he can decide which kinds of contribution will be available and their labels. This configuration option allows customizing the tool for any kind of use. For example, in Fig. 2 the contribution types were Question, Argument and Comment. However, it is also possible not to use labels for the contributions. An option which can be configured at start-up but can also be changed at runtime is the possibility to make the contributions anonymous. This functionality can be useful to get opinions without the pressure of others' judgement. Similar configuration options are available for the Graphical tool. In Fig. 3 you can see the Graphical tool with contribution types Task, Question, Problem and Comment. This tool also allows configuring the connectors among the boxed notes, choosing the color, the kind of line and the allowed number of bend points, and allows creating links from a box to an arrow, in addition to links from box to box. In the example (Fig. 3), the

team members use the tool to organize the tasks, underlining possible problems and adding their comments.

5 Architecture

We now describe the steps performed to integrate CoFFEE into Jazz RTC. The CoFFEE applications that provide the collaboration functionalities are the CoFFEE Controller (the server side) and the CoFFEE Discusser (the client side); a component diagram of the overall architecture is shown in Fig. 4. Since using the CoFFEE tools requires both the CoFFEE server and client, we had to integrate both the CoFFEE Controller and the Discusser in RTC without changing the underlying architecture of CoFFEE. This required implementing a plug-in able to launch either the CoFFEE Controller or the CoFFEE Discusser, depending on the user's choice to coordinate a discussion or to join an active one. This step allows launching the CoFFEE tools within RTC. To enhance the integration beyond the execution of the CoFFEE tools in RTC, we have also provided the possibility to save the CoFFEE discussions in the Jazz repository. This required implementing a data model, representing a discussion, compatible with the data handled by the Jazz repository. Moreover, to interact with the Jazz repository (to save, delete and get the discussions), we had to implement a Jazz component, consisting of a server component and a client component. The server component is integrated in the Jazz server, where it interacts with the repository and answers the requests of the client component. The client component is integrated in RTC and is responsible for exchanging data (i.e. the CoFFEE discussions) with the server component.
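The division of labor just outlined can be made concrete with a small sketch. The interface below mirrors the discussion data model and the repository operations described in this section (get all, read, store, delete); all names are ours, not the actual Jazz API, and the in-memory implementation merely stands in for the real SOAP/XML exchange between the client and server components.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the client-side library described in the text:
// it ships discussion items to/from the CoFFEE-Jazz server component.
// All names and signatures here are illustrative, not the actual Jazz API.
class CoffeeDiscussion {
    final String title; // title of the discussion
    final String tags;  // tags, useful for searching the repository
    final byte[] log;   // the discussion log, stored as a binary large object

    CoffeeDiscussion(String title, String tags, byte[] log) {
        this.title = title;
        this.tags = tags;
        this.log = log;
    }
}

interface DiscussionClientLibrary {
    List<CoffeeDiscussion> getAllDiscussions();
    CoffeeDiscussion readDiscussion(String title);
    void storeDiscussion(CoffeeDiscussion d);
    void deleteDiscussion(String title);
}

// In-memory stand-in for the server side, just to make the contract
// concrete; the real component talks SOAP/XML to the Jazz repository.
class InMemoryDiscussionLibrary implements DiscussionClientLibrary {
    private final List<CoffeeDiscussion> store = new ArrayList<>();

    public List<CoffeeDiscussion> getAllDiscussions() {
        return new ArrayList<>(store);
    }

    public CoffeeDiscussion readDiscussion(String title) {
        return store.stream()
                    .filter(d -> d.title.equals(title))
                    .findFirst().orElse(null);
    }

    public void storeDiscussion(CoffeeDiscussion d) {
        store.add(d);
    }

    public void deleteDiscussion(String title) {
        store.removeIf(d -> d.title.equals(title));
    }
}
```

Keeping the operations behind an interface is what allows the CoFFEE plug-in and the CoFFEE artifact to use the same library regardless of how the data actually reaches the Jazz server.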
Furthermore, to make the integration as seamless as possible, we have added some graphical glue: the CoFFEE Discussions appear in several points of the RTC user interface, like the other Jazz artifacts, and we have defined a new view providing the details of the discussions. In the following, we give details about the steps just outlined.

Running CoFFEE in RTC. The integration of CoFFEE into RTC requires launching the CoFFEE tools within RTC and, since RTC is Eclipse-based, this means being able to launch the CoFFEE tools within Eclipse. Since CoFFEE is already based on the Rich Client Platform, this step has been quite straightforward: we implemented an Eclipse plug-in that allows CoFFEE to run within the Eclipse UI. This plug-in contributes a toolbar through which the user can access the CoFFEE functionalities. The plug-in can alternatively play the role of the Controller or of the Discusser: it can launch a discussion, connect to an existing discussion, or reload an old one. When playing the Controller's role, it can launch a tool at any time, configuring it on the fly or loading its configuration from a file. Some CoFFEE application-specific functionalities strictly related to the learning environment, such as loading pre-configured sessions (i.e. sequences of steps with pre-configured tools), have been excluded.
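As a rough illustration, a toolbar contribution of this kind is typically declared in the plug-in's plugin.xml. All identifiers and class names below are invented for illustration; they are not the actual ones used by CoFFEE.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical sketch: how an Eclipse plug-in can contribute a toolbar
     button of the kind described above. All ids and class names are
     invented for illustration. -->
<plugin>
   <extension point="org.eclipse.ui.commands">
      <command id="coffee.startDiscussion"
               name="Start CoFFEE Discussion"/>
   </extension>
   <extension point="org.eclipse.ui.handlers">
      <handler commandId="coffee.startDiscussion"
               class="coffee.rtc.StartDiscussionHandler"/>
   </extension>
   <extension point="org.eclipse.ui.menus">
      <menuContribution locationURI="toolbar:org.eclipse.ui.main.toolbar">
         <toolbar id="coffee.toolbar">
            <command commandId="coffee.startDiscussion"
                     label="Start CoFFEE Discussion"/>
         </toolbar>
      </menuContribution>
   </extension>
</plugin>
```

The declarative registration is what lets the host (here RTC, being Eclipse-based) discover and render the contribution without the plug-in changing anything in the host itself.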

Fig. 4. Component-based architecture of the integration between CoFFEE and RTC.

The integration of CoFFEE in RTC goes beyond the execution of the CoFFEE tools within the RTC environment: we have modified the save/load functionality so that the CoFFEE discussions are stored in the Jazz repository and remain available to all users even after the discussion has ended.

The CoFFEE data model. To store data in the Jazz repository, it is necessary to create a data model compatible with the data handled by the repository. We have created a specific data model for CoFFEE discussions which consists of: the title of the discussion (string); some tags, useful for searching (string); the discussion log (binary large object). The model has been created using the Eclipse Modelling Framework [22]. The data model must be added to the Jazz repository to enable Jazz to manipulate that kind of data, so we added it by creating an update site and exploiting the Jazz server provisioning mechanism. The definition of the CoFFEE data model is not enough to save the discussions in the Jazz repository: the interactions between RTC and the repository happen through Jazz components, which consist of a server side (integrated in the Jazz server) and a client side (integrated in RTC).

The CoFFEE-Jazz component. To interact with the Jazz server, therefore, we have created a CoFFEE-Jazz component dedicated to managing the CoFFEE discussions. The server side and the client side of the CoFFEE-Jazz component communicate with each other via SOAP/XML (see Fig. 4). The CoFFEE-Jazz client component is a plug-in installed in RTC that extends a Jazz extension point; this extension enables the Jazz client infrastructure to locate the interface of the library and to create an instance of its implementation. The client component implements a library which

allows sending/receiving the modeled data to/from the server. The operations provided are: getting all discussions; reading a discussion; storing a discussion; deleting a discussion. The client library is used by the CoFFEE plug-in and by the CoFFEE artifact to perform the operations above. The CoFFEE-Jazz server component is a plug-in installed in the Jazz server and extends the Jazz extension point used to configure the service. The CoFFEE-Jazz server component interfaces with the Jazz repository to perform the operations on the data, and exposes the methods called by the client via the SOAP/XML mechanism provided by the Jazz platform.

The CoFFEE Discussion artifact. To make the integration between CoFFEE and RTC as seamless as possible, we have introduced graphical items in RTC to manage the CoFFEE discussions like the other Jazz artifacts. To show the CoFFEE discussions in the RTC interface, and to allow operations on them, we have created a new Jazz artifact type. Jazz artifacts are almost all the objects shown in the RTC interface, on which RTC users perform operations. By contributing to some extension points, a new artifact type can be shown in different places of the RTC UI and can support a number of operations.
This is the list of extension points to which the CoFFEE artifact contributes:
- an extension point defined by RTC, which allowed us to show the CoFFEE discussions in the Team Artifacts view;
- an extension point defined by RTC, which allowed us to enable the CoFFEE discussions to be selected as favorites in the Team Artifacts view;
- org.eclipse.ui.popupMenus, defined by Eclipse, which allowed us to add entries to the contextual menu of a CoFFEE Discussion (once a discussion is selected, the contextual menu allows Load Discussion, Save to Filesystem, Delete, Add to Favorites);
- org.eclipse.core.runtime.adapters, defined by Eclipse, to adapt the artifact type to a label provider used to render elements of the type;
- org.eclipse.ui.views, defined by Eclipse, which allowed us to define a dedicated, fully detailed view for the CoFFEE Discussions; the view shows the author, the creation date, the title and the tags of each discussion.

We want to emphasize that the integration of CoFFEE in RTC has been possible thanks to the extensibility and composability of Eclipse and Jazz, the platforms on which CoFFEE and RTC are built. The plug-in based architecture of Eclipse is the basis that has allowed the composition of different systems to create an enriched environment through the seamless integration of their functionalities.

6 Conclusions and Future Works

Software development is an inherently collaborative activity, and communication among all the people involved is crucial for the coordination of a project. Several

systems offer mechanisms to support coordination activities and the work flow [23-25]. However, these systems do not provide communication and collaboration functionalities integrated in the development environment. The idea of integrating communication and collaboration functionalities in IDEs arises from the necessity to support informal communication, that is, the quick and unplanned exchange of information which often happens among the developers of a co-located team. Such unplanned communication is not tracked in the software development environment, and the exchanged information remains available only to the people involved in the communication event. In remote teams, unplanned communication is much more unusual, due to several obstacles (different time zones, lack of awareness of others' skills, lack of chances to meet). Given these considerations, the idea is to integrate communication and collaboration functionalities in an IDE, to enable the exchange of information among the developers and, at the same time, to record this information in the work environment. Therefore, our work aims at integrating collaborative functionalities coming from CoFFEE, an Eclipse-based suite of applications supporting collaborative learning, into RTC, an Eclipse-based development environment providing advanced functionalities to support coordination activities, as presented in [10]. In this paper we presented the architecture underlying that result, which derives from the design choices of CoFFEE and RTC to build on Eclipse and Jazz. These choices have been crucial in allowing the integration, since the open source nature of Eclipse and Jazz and their composability and extensibility are the key points on which our work is based.
However, our work has not been without difficulties: as stated by a group of IBM researchers [26] (from the Watson, Haifa and China IBM Research Centers, with experience also in developing Ensemble [6]), the lack of well-structured and complete documentation is among the difficulties in extending RTC. It has sometimes been very complex and time consuming to identify the correct patterns for extending some functionalities in RTC, often solved only by educated guessing, supported by some experience with the Eclipse platform and a bit of intuition (not to mention luck!).

Acknowledgments: The authors gratefully acknowledge IBM for partially supporting, with the IBM Jazz Innovation Award 2008, the research described in this paper.

References

1. IBM: Rational Team Concert. (July 2010)
2. Eclipse. (2010)
3. IBM: Jazz Web site. (July 2010)
4. Jazz Community: Research Projects. relatedresearchprojects.jsp (July 2010)
5. Calefato, F., Gendarmi, D., Lanubile, F.: Adding Social Awareness to Jazz for Reducing Socio-Cultural Distance within Distributed Development Tools. In: Proc. of the 4th Italian Workshop on Eclipse Technologies, Sept 2009, Bergamo (Italy) (2009)
6. Xiang, P.F., Ying, A.T.T., Cheng, P., Dang, Y.B., Ehrlich, K., Helander, M.E., Matchen, P.M., Empere, A., Tarr, P.L., Williams, C., Yang, S.X.: Ensemble: a recommendation tool for promoting communication in software teams. In: RSSE 08: Proceedings of the 2008 international workshop on Recommendation systems for software engineering, New York, NY, USA, ACM (2008)

7. De Chiara, R., Di Matteo, A., Manno, I., Scarano, V.: CoFFEE: Cooperative Face2Face Educational Environment. In: Proceedings of the 3rd International Conference on Collaborative Computing: Networking, Applications and Worksharing (CollaborateCom 2007), November 12-15, 2007, New York, USA (2007)
8. Belgiorno, F., De Chiara, R., Manno, I., Overdijk, M., Scarano, V., van Diggelen, W.: Face to face cooperation with CoFFEE. In: Proceedings of the 3rd European Conference on Technology-Enhanced Learning (ECTEL 08), September 2008, Maastricht, The Netherlands. Lecture Notes in Computer Science, Springer-Verlag (2008)
9. De Chiara, R., Manno, I., Scarano, V.: CoFFEE: an Expandable and Rich Platform for Computer-Mediated, Face-to-Face Argumentation in Classroom. In: Educational Technologies for Teaching Argumentation Skills. Bentham eBooks (in press)
10. Belgiorno, F., Manno, I., Palmieri, G., Scarano, V.: Argumentation Tools in a Collaborative Development Environment. In: Proc. of the 7th International Conference on Cooperative Design, Visualization and Engineering (CDVE 2010). Lecture Notes in Computer Science, Springer Berlin / Heidelberg (in press)
11. Aydin, S., Mishra, D.: A tool to enhance cooperation and knowledge transfer among software developers. In: CDVE (2009)
12. Sengupta, B., Chandra, S., Sinha, V.: A research agenda for distributed software development. In: ICSE 06: Proceedings of the 28th international conference on Software engineering, New York, NY, USA, ACM (2006)
13. Cheng, L.T., de Souza, C.R., Hupfer, S., Patterson, J., Ross, S.: Building collaboration into IDEs. Queue 1(9) (2004)
14. Booch, G.: Introducing collaborative development environments. Technical report, IBM Rational (2006)
15. Carmel, E., Agarwal, R.: Tactical approaches for alleviating distance in global software development. IEEE Software 18(2) (2001)
16. Lanubile, F.: Collaboration in distributed software development. In: ISSSE
(2008)
17. Herbsleb, J.D., Grinter, R.E.: Splitting the organization and integrating the code: Conway's law revisited. In: ICSE 99: Proceedings of the 21st international conference on Software engineering, New York, NY, USA, ACM (1999)
18. Gutwin, C., Greenberg, S., Blum, R., Dyck, J., Tee, K., McEwan, G.: Supporting informal collaboration in shared-workspace groupware. J. UCS 14(9) (2008)
19. Hupfer, S., Cheng, L.T., Ross, S., Patterson, J.: Introducing Collaboration into an Application Development Environment. Technical Report 04-12, IBM Research, Collaborative User Experience Group (2004). In: CSCW 04.
20. CollabVS: Collaborative development environment using Visual Studio.
21. Belgiorno, F., De Chiara, R., Manno, I., Scarano, V.: A Flexible and Tailorable Architecture for Scripts in F2F Collaboration. In: Proceedings of the 3rd European Conference on Technology-Enhanced Learning (ECTEL 08), September 2008, Maastricht, The Netherlands. Lecture Notes in Computer Science (5192), Berlin, Heidelberg, Springer-Verlag (2008)
22. Eclipse Modelling Framework.
23. CollabNet: SourceForge. (2010)
24. GForge Group: GForge. (2010)
25. Edgewall Software: The Trac project. (2010)
26. Cheng, P., et al.: Jazz as a research platform: experience from the Software Development Governance Group at IBM Research. In: Proc. of the First International Workshop on Infrastructure for Research in Collaborative Software Engineering (IRCoSE) at FSE (2008)

Collaborative learning among university teams in educational projects through the use of RTC

Paolo Maresca 1, Lidia Stanganelli 2
1 Dipartimento di Informatica e Sistemistica, Università di Napoli Federico II
2 Dipartimento di Informatica Sistemistica e Telematica, Università di Genova

Abstract. Collaborative learning opens new frontiers for e-learning, in terms of knowledge that can be shared and used by an ecosystem made up of the people taking part in university educational projects. Moreover, this new paradigm is an opportunity to rethink the contents around which people virtually learn: contents that must increasingly become services of a network, which constitutes an innovation in e-learning. This project, called ETC, whose progress we discuss here, involves IBM Rational and IBM Academic Initiative Italy, together with the Italian Eclipse community and nine Italian universities, including the University of Napoli Federico II, University of Bari, University of Milano Bicocca, University of Bologna, University of Bergamo, University of Genova and University of Salerno, in a collaborative learning process on projects concerning common educational topics.

Keywords: E-learning, grid-learning, Eclipse and Jazz technologies.

1 Introduction

The term grid was born for electrical power distribution networks, in which it is not important to know from which station the energy is taken, but it is essential to know how to connect to the network. This term was later borrowed to describe an innovation in computing, under the name of grid computing. Grid computing denotes a model able to improve memory and computing power by taking resources from the network.
Grid computing also opens a new season for the development of e-learning [1], because: a) the power of the computers distributed in a grid network can be used to build virtual laboratories; b) contents will be distributed and the formation of virtual classes will be personalized; c) new kinds of educational resources will have to be conceived for this new model; d) better collaboration among people will be possible, with a faster transfer of knowledge than in face-to-face teaching, thanks to cooperative learning. Cooperative learning is encouraged by grid computing, especially if understood as the rapid transfer of knowledge from those who possess it to those who request it, also in relation, as stated in point a), to the greater computing power made available

by the grid network, which would allow the setting up of complex laboratories that were previously not usable. Cooperative learning is defined [2] as an activity in which small groups of students work as a team to solve a problem, carry out a task or reach a common goal; in practice, it is a teaching method that uses small groups in which students work together to mutually improve their learning. This approach differs both from competitive learning (in which students work against each other to obtain a better mark than their classmates) and from individualistic learning (in which students work alone to reach learning goals independent of those of their classmates). Note that both of these kinds of learning, which have undisputed advantages, turn out to be, for the group, a zero-sum game: the winner takes the stake. In practice the whole group loses, since its learning does not grow. In cooperative learning, instead, there is a common goal and team play in which there is no stake to take away but an objective to pursue. This objective is the pretext for activating the transfer of knowledge which, in a geographically distributed scenario, enables the transfer of knowledge among students belonging to different environments. In short, if one wants to increase the learning of an extended group, one must aim at a mechanism that does not hinder the flow of knowledge with a zero-sum game. The ETC project (Enhancing Team Cooperation using Rational Team Concert, RTC) has the objective of carrying out an experiment in cooperative learning. Unlike competitive and individualistic learning, which cannot always be used appropriately, cooperative learning can be applied to every task, every subject and every curriculum.
The idea of this work is to apply cooperative learning to one of the activities included in first-level courses such as software engineering, databases, fundamentals of computer science III, web application development, etc., for which projects are required that must often be developed in cooperation among groups. A further aim is to optimize the learning effort of the average university student who, under the current legislative framework, attends compact courses and is asked to acquire the related knowledge within a limited time span. Moreover, putting the groups in contact with communities of practice with a strong capacity for project development, such as the Eclipse community [3], so that the best students can act as carriers of knowledge from universities to companies by developing highly innovative projects within the community itself, seems to us a practice that helps improve their speed of learning. The paper is organized as follows: Section 2 illustrates the architecture of the project; Section 3 discusses the projects; Section 4 discusses conclusions and future developments.

2 Project Architecture

The goal of this section is to illustrate the architecture of the project. The idea is to build an open environment in which students are able to cooperate and innovate by building projects within courses attended at their own universities. Initially the students will cooperate on topics related to courses such as software engineering and web programming, but in the future they will also be able to cooperate on other topics such as project management, risk management, etc. Cross-university projects will be set up in order to cooperate with the Italian Eclipse community [4]. Because of the complexity of the architecture, we divide this section into four sub-sections: the first deals with the hardware architecture, the second with the software architecture, the third with the peopleware architecture, and the fourth with the grid-knowledgeware architecture. The latter deals more specifically with how, in an open innovation network, students are able to cooperate and innovate in a sort of virtual Renaissance workshop.

2.1 Hardware Architecture

This section illustrates the hardware and software architecture of the project. The project is supported by a server provided by IBM Italy [3] on free loan to the Italian Eclipse community and installed at the Faculty of Engineering of the University Federico II of Napoli. This machine has the following characteristics: x3650m2, Xeon 4C E W 2.4GHz/1066MHz FSB/8MB L3, 2x1GB, O/Bay 2.5in HS SAS, SR BR10i, CD-RW/DVD Combo, 675W p/s, Rack; IBM 146 GB 2.5in SFF Slim-HS 15K 6Gbps SAS HDD (4); Redundant 675W Power supply (1). It must support a number of simultaneous accesses which, for this year, has been estimated at between 150 and 200. Regarding connectivity with the other devices of the project network, Fig. 1 shows an example of organization.

Figure 1. ETC hardware architecture

As can be seen, both the application server and the database server, including the related firewall, are hosted on a server located at the Italian Eclipse community at the Faculty of Engineering of the University Federico II of Napoli, and all the other universities are connected to it through fast, multi-device links that are independent of both the software and the hardware platform.

2.2 The Software Architecture of the GRID-ETC Project

The software platform is the most powerful weapon of this project. It will be an advanced experimentation platform which, in addition to the standard tools of the IBM Academic Initiative, includes some tools awarded by IBM itself for their innovation, in particular for the interaction among people and for conducting the phases of sharing and negotiating specifications. The flagship tool on which the collaborative experimentation of the student groups will be based is Rational Team Concert (RTC). Jazz is a collaborative infrastructure designed to let users collaborate toward a common goal: in short, a collaboration platform for teams, scalable and extensible, for integrating activities throughout the software lifecycle. The Jazz products constitute an innovative approach to integration based on flexible, open services that exploit the architecture of the Internet. The Jazz project consists of three elements:
- an architecture for lifecycle integration;
- a portfolio of products designed for the team;
- a community of stakeholders.

The focus of this product is no longer software development or the individual, but team collaboration. The products RTC, RQM and RRC are all based on the Jazz architecture (see Fig. 2):
- Rational Team Concert (RTC): a collaborative working environment for developers, architects and managers of software projects. It provides work item management, source control, build management and iteration planning, and supports several software processes, including Scrum and OpenUP.
- Rational Quality Manager (RQM): a web-based environment for professional-grade test case management. It provides a customizable solution for test planning, workflow control, tracking and reporting, able to quantify the impact of design decisions on business objectives.
- Rational Requirements Composer (RRC): a sophisticated solution for software requirements definition that includes visual design and requirements elicitation.

The software architecture of ETC is an important peculiarity of the project, since it must be versatile (applicable to every course in the experimentation) and must also be usable by students at several levels (bachelor's, specialist or master's). At present, the RTC architecture itself is evolving in the IBM Rational laboratories in Raleigh, USA. A very useful work in this regard was presented at the Eclipse-IT 2009 workshop by Scott Rich, Distinguished Engineer at the IBM Rational Laboratories in Raleigh [5], which shows the whole strategic evolution of this important cooperation tool.
Meanwhile, a snapshot of the architecture can be seen in Fig. 2. It is a hybrid between a layered (three-tier) architecture and a service-oriented architecture, and it is founded on the cooperation paradigm for project development; it therefore provides tools both for conceiving projects and for their realization and quality verification.

Figure 2. RTC architecture (courtesy of Scott Rich, IBM)

As can be seen, the RTC architecture is innovative in that it combines both the development process and the working tools needed to implement it. Above all, it provides, built-in, a number of development process models, among them agile ones. We believe the latter can be the subject of the experimentation, given the short time available for carrying out the tasks (about 9 weeks). In particular, we plan to adopt SCRUM as the agile development process and OpenUP as the standard process.

2.3 The Peopleware and Toolware Architecture of the GRID-ETC Project

The GRID-ETC project has a complex peopleware organization. It consists of 9 university sites (Federico II, Bologna, Bergamo, Milano Bicocca, Genova, Savona, Salerno, Bari, Taranto) with 350 registered students, 8 champion students (about one per site) who are PhDs or PhD students, one platform manager and system administrator, and at least fifteen professors, researchers and leaders of the courses under experimentation. In addition, there is the organization of the courses, which develops locally in complete autonomy. Since each course needs to develop its own content, its organization also requires its own tools; the use of further tools is not excluded, should needs emerge during the realization of the individual projects. It is worth emphasizing that the use of tools that should be useful in different realities and contexts is not yet standard practice. Fig. 3 shows an example of the course-tool association made for the software engineering course, after agreeing on it with all the sites where the course is taught.
Moreover, the same fragment of mind map, developed collaboratively, shows a detail of the agenda, which at the time of writing included a sequence of structured tasks, some critical issues and the status of the project. It is worth emphasizing that this way of conducting activities is a means of cooperating better, especially when several geographically distributed units are involved. In addition, the tool can usefully be employed by students who, while carrying out their tasks, need to represent the evolution of their activities in an easily understandable way.

Figure 3. Course-tool association and agenda of the ETC project

Returning to the software engineering course (see Fig. 3), it has associations (links) with the tools of the ETC platform: in particular, for this course RTC will be used to enable cooperation among students, RRC/RRP for the definition and sharing of requirements, RSA for the detailed architectural design, Eclipse for Java development, JUnit for testing, and finally Rational Quality Manager for building test cases. This is an example of how the GRID-ETC platform is composed once the course has been chosen. The software engineering course thus built with grid resources can be attended by the universities of Napoli Federico II, Bologna and Milano Bicocca. Another example is the course on the design of web-based applications. In this case RTC is needed for cooperation among students and RRC/RTC for the composition of requirements, while for building web pages Eclipse with the WTP plug-in, or Drupal, can be used directly; this is the case of Genova-Savona. Finally, the Fundamentals III course, taught in Bergamo, had the task of developing testing activities on existing Java code, so the platform was customized using the following modules: RTC for the coordination of the groups and people taking part in the projects, RQM for building the test cases, and Eclipse with JUnit for testing.
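The course-to-tool associations just described can be pictured as a simple lookup table. The sketch below is purely illustrative (the class and the English course names are our own, not part of the ETC platform); it encodes the three configurations mentioned in the text:

```java
import java.util.List;
import java.util.Map;

// Illustrative sketch: the course-to-tool associations agreed among the
// participating sites, encoded as an immutable lookup table. Course names
// and the class itself are hypothetical, not part of the real ETC platform.
public class CourseToolMap {
    static final Map<String, List<String>> TOOLS = Map.of(
        "Software Engineering",
            List.of("RTC", "RRC/RRP", "RSA", "Eclipse", "JUnit", "RQM"),
        "Web Application Design",
            List.of("RTC", "RRC", "Eclipse WTP", "Drupal"),
        "Fundamentals of Computer Science III",
            List.of("RTC", "RQM", "Eclipse", "JUnit"));

    public static void main(String[] args) {
        // Print each course with its agreed tool chain
        TOOLS.forEach((course, tools) ->
            System.out.println(course + " -> " + String.join(", ", tools)));
    }
}
```

Composing the platform for a new course then amounts to adding one entry to such a table and provisioning the listed modules.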
2.4 The Knowledge-Sharing Architecture of the GRID-ETC Project

Another fundamental point, in our opinion, is the knowledge network created in this project and the observation of the mechanisms of creation, dissemination and use of knowledge. The GRID-ETC project (which serves to experiment with ETC in a distributed setting) has an open knowledge-sharing organization, since it is based on the concept of the open innovation network, in which many local Renaissance workshops contribute to the growth of the knowledge shared and distributed in the network. The projects, as will be seen in the next section, are very diversified, but they sometimes require a body of common knowledge. First of all, at every site of the experimentation there is a figure we have called the champion student, i.e. a student who is far more expert than the average of the students admitted to the experimentation. This is typically a PhD student with a five-year degree in information engineering or a specialist degree in the same field. His task is to guide the groups and to interface both with the reference teacher and with the champion students of the other sites. The relational activities within the team made up of the champion students are also interesting; indeed, this team constitutes the real support of the whole project. In addition, there is a figure even more expert than the champion team: the manager of the installed hardware and software system. He is a PhD student who has visibility over the administration and the existing content, besides managing the services and checking that all the ongoing projects are in order. In this way the ETC experimentation looks much more like a Renaissance workshop distributed over the territory and, from this point of view, it is an opportunity to organize and sustain projects, letting knowledge flow, activating exchange mechanisms as much as possible and facilitating interactions. Precisely to this end, a special team works on tools that allow the knowledge network to learn more easily, through the study and development of tools (not present in the software provided by IBM) that may be needed to implement some courses. It should be remembered that the tools the Academic Initiative has made available to the ETC project are normally used by expert researchers, and this is the first time they are being tried out in an inter-university teaching activity. As proof that innovative tools are needed, Fig. 4 shows a solution for synchronizing the documentation actually usable and approved by the team of champion students. The design team has developed an Eclipse plug-in that aligns the documentation and the right version for all 350 participants in the project.
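The core of such a plug-in is reading an XML catalog of classified resources. The sketch below shows only that parsing step, self-contained for clarity; the element and attribute names (`catalog`, `resource`, `type`, `name`) are assumptions, not the real ETC schema:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Hypothetical sketch of the catalog update step described in the text:
// the plug-in refreshes an XML file listing teaching resources classified
// by type (e-book, readme, tutorial, slides, ...). The schema is assumed.
public class ResourceCatalog {
    public static List<String> list(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        List<String> out = new ArrayList<>();
        NodeList nodes = doc.getElementsByTagName("resource");
        for (int i = 0; i < nodes.getLength(); i++) {
            Element e = (Element) nodes.item(i);
            out.add(e.getAttribute("type") + ": " + e.getAttribute("name"));
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<catalog>"
            + "<resource type='tutorial' name='RTC quick start'/>"
            + "<resource type='slides' name='OpenUP overview'/>"
            + "</catalog>";
        list(xml).forEach(System.out::println);
    }
}
```

In the real plug-in, the refreshed list would then be rendered in the Eclipse UI instead of printed.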

Figure 4. ETC: documentation sharing tool

The figure shows the existence of several resources and their classification (e-book, readme, tutorial, slides, etc.). In this way the student no longer needs to search for the resource: simply by launching the Eclipse client with RTC, he only has to let the XML file of his plug-in be updated and then, in the ingsoftwareplugin menu, select the View contents command to refresh the portfolio of existing teaching resources. Note also that the plug-in turns into a web service that is more effective and efficient than a web search done by the student.

3 The ETC Projects: First Results and Comments

The activity of setting up projects among different universities has been very important in this first phase of ETC. We decided to give room both to locally relevant projects, so as to preserve the traditions of the related courses, and to inaugurate a season of inter-university projects in which courses with the same name would have projects involving students from different universities. In this way both inter-university and intra-university projects were born. Fig. 5 shows an RTC view in which, on the left-hand side, the projects currently active in the ETC experimentation are listed. These projects involve students, champion students, professors and thesis students, for a total of about 350 users. In particular, of the 11 projects registered, 6 are run by several universities (see the OTRE project [7], On the Road to Eclipse, which involves Federico II, Genova and Savona). Three are local projects (Federico II), one of which concerns the re-engineering of the Italian Eclipse community web site, plus a service project for ETC devoted to the development of new technologies supporting the use and dissemination of knowledge. Moreover, the University of Bergamo has developed about ten projects concerning the testing of existing software systems, while the University of Bologna Alma Mater is starting up a project involving the software design phases together with the University Federico II.

Figure 5. ETC: the project area

From Fig. 5 one can also see, on the right-hand side, that each project (in the figure, OTRE - Single-sign-in system) has its own team, and each team is made up of people playing different roles (but sometimes also the same role) within a predefined software development process (in this case OpenUP). The process binds people to modes and times of activity, and moreover it provides a common space and version management of all the documentation produced for the process itself, reducing the overhead on one's own PC and exploiting the grid facilities of the network and of the ETC application. In other words, no student, regardless of his geographical location, needs to worry about where the project data are, since they are automatically kept where the project manager has decided to archive them (possibly locally on one of the servers).

4 Conclusions and Future Developments

The goal of this section is to illustrate the first results of the ETC project, comment on them and outline future developments. The idea of building an open environment in which students and teachers are able to cooperate and innovate by building projects within courses attended at their own universities has been very well received. There are many reasons: in first place, the possibility of allowing, with almost no problems, interaction among students of different faculties on projects concerning the same teaching content (e.g. software engineering). As a second advantage, the opportunity given to teachers to cooperate on the same teaching content, allowing them to convey innovations much more effectively, also from a grid-e-learning perspective. Third, since we are talking about innovation, we believe that ETC concretely represents an experience that helps to innovate and to show our students what innovation is 1 and how it is created through the use of top-level software tools. The ETC project is an innovation in itself, since it is a special environment in which, while projects are being built, the way of learning and of training new generations is also being innovated. Finally, the last advantage we wish to mention is the relationship with companies of great standing in the field of applied software research. Thanks to the cooperation with IBM, in particular with Rational Italia and the IBM Academic Initiative, this project has been an opportunity for many dozens of students to interact with IBM researchers: an opportunity to bring university and industry closer by designing a training path together.
Finally, it is worth emphasizing how the use of grid technologies also improves education, and thus e-learning: the ETC project bears witness to this, although it seems to us that the way e-learning resources are built needs to be rethought. Content must increasingly become services of a grid network, and this constitutes innovation in e-learning. Not by chance, a first service of the ETC grid network is the project documentation, which has been transformed into a web service that is always up to date and never off-line. This consideration has let us observe a reduction in the time students spend installing and using the ETC project environment: precisely because the documentation is now a web service, the average student no longer wastes time searching for it in the maze of the web. We believe this new philosophy improves both productivity and the dissemination of concepts, lowering the time needed to absorb them. A team of thesis students is currently working on transforming many activities into services, among them the dynamic creation of new documentation and its sharing with the network of people engaged in the ETC project. It is also worth emphasizing that ETC is a workshop for experimenting with innovative applications which, once tested, could be adopted in the IBM tools themselves as changes to be incorporated into the commercial products.

1 While an invention is the creative act resulting from an aspect of problem solving, innovation is its application: the refinement of the invention and its transformation into usable products.

In the next year of experimentation we plan to involve a larger number of universities, up to about a thousand students, some ten universities and some ten courses; likewise, we intend to study the knowledge network created around the activities put in place, also in relation to other knowledge networks such as the Italian and European Eclipse open source communities.

References

1. Pancratius, V., Vossen, G.: Towards E-Learning Grids: Using Grid Computing in Electronic Learning. In: IEEE Workshop on Knowledge Grid and Grid Intelligence (2003).
2. Johnson, D. W., Johnson, R. T., Holubec, E.: Apprendimento cooperativo in classe. Erickson (1994).
3. IBM:
4. Eclipse Italian community:
5. Rich, S.: The Modern Platform for Software Engineering Tools. In: Gargantini, A. (ed.) Proc. of the 4th Workshop of the Eclipse Italian Community, Bergamo, Italy, 31-38 (2009).
6. Maresca, P.: Projects and Goals for the Eclipse Italian Community. In: Proceedings of the Fourteenth International Conference on Distributed Multimedia Systems (DMS2008), Boston, Knowledge Systems Institute Graduate School, USA (2008).
7. Coccoli, M., Maresca, P., Stanganelli, L.: Enforcing Team Cooperation: an Example of Computer Supported Collaborative Learning in Software Engineering. In: Proceedings of the Seventeenth International Conference on Distributed Multimedia Systems (DMS2010), Oak Brook, Illinois, USA, October 14-16, 2010, Knowledge Systems Institute Graduate School, USA, to be published.

Enforcing Team Cooperation Using Rational Software Tools into Software Engineering Academic Projects

Mauro Coccoli 1, Paolo Maresca 2, and Lidia Stanganelli 1

1 Dipartimento di informatica, sistemistica e telematica, Università di Genova, Viale Causa 13, 16145, Genova, Italy {mauro.coccoli,
2 Dipartimento di informatica e sistemistica, Università di Napoli Federico II, Via Claudio 21, 80125, Napoli, Italy

Abstract. In this paper, the activity related to the development of an educational project called OTRE is reported. The project involves a number of students from different Universities, working on activities to be conducted on a distributed architecture developed with the specific aim of enforcing team cooperation. This e-learning project is supported by IBM Rational and IBM Academic Initiative Italy, together with the Italian Eclipse community and several Italian universities, including the University of Napoli Federico II, University of Milano Bicocca, University of Bologna, University of Bergamo, University of Genova, and University of Salerno, in this first experimentation.

Keywords: Distributed learning, Cooperative learning, Software engineering, Open ecosystem.

1 Introduction

In this paper, the OTRE Project is considered. OTRE is the acronym for "On The Road to Eclipse-it 2010", and the project has been conceived to produce results showing the scientific and academic community the educational experience carried on by students working with the Eclipse environment. It has been launched in collaboration with the Italian branch of the Eclipse Community, with the objective of enforcing and enlarging cooperation activities among a large number of students, all involved in software engineering courses at Italian Universities. In fact, the main topic of the OTRE project is the creation of an effective cooperative learning project, to be used for higher education on team cooperation, in software engineering classes, for the analysis, design, and development of software programs along their lifecycle. Cooperative learning is defined as an activity in which "small groups of students work in teams to solve a problem, execute a task or achieve a common goal" [1]. In practice, it is an educational strategy that uses small groups in which learners have to work together to achieve a common objective; in this way, they have to interact, and the individual work depends on the others' work, so that the students can mutually improve their learning. This approach differs both from competitive learning, in which students work against each other to achieve a better mark than their companions, and from individualistic learning, where students work alone to achieve learning objectives independent of those of their companions. It is worth noticing that these latter types of learning, which have undisputed advantages, appear to be, for the group, a zero-sum game: the winner takes away the prize! What actually happens is that the entire group loses the opportunity to enhance its learning. On the contrary, in a cooperative learning strategy, a common objective is set and a team game takes place, in which there is no prize to take away but a goal to reach. This goal is used as a pretext to stimulate interaction among learners, and to push knowledge transfer. Moreover, if the team is composed of people who are not in the same room at the same time, a scenario of asynchronous and geographically distributed human resources arises, which requires the transfer of knowledge among different places and among students belonging to different realities.
In a few words, to the aim of enhancing the learning of a large group, a mechanism has to be provided that does not impede the flow of knowledge with a zero-sum game! Moreover, in contrast to individualistic and competitive learning, which cannot always be used properly, cooperative learning can be profitably applied to every task, every subject and every curriculum. The idea that lies behind this paper is that of applying cooperative learning in software engineering academic projects. To this aim, let us consider courses such as software engineering, databases and information systems, computer programming, web application development, web design, and others. In such courses, practical activity is very important and students are asked to design and/or develop small pieces of more complex software systems. In most cases, teachers request that such small projects be developed in cooperation among groups, with the aim of optimizing the learning effort of students who, in the current Italian legislative framework, must attend compact courses in a short and intense semester. Students will work with researchers from the Eclipse Italian community [2]. This will reinforce the knowledge transfer between universities and the companies where students do their internships. This relationship seems to be a good practice to improve their speed of learning. In conclusion, according to the above considerations about team cooperation and its effectiveness on learning, the authors present the results of an experiment on cooperative learning carried on within the Eclipse Italian community. The work has taken place in a working environment specifically designed for the enforcement of team cooperation activities, within the framework of a project called ETC, Enforcing Team Cooperation, which exploits the IBM Rational software tools [7]. The remainder of the work is organized as follows. In Section 2 we discuss the project architecture of the system. In Section 3 we discuss the ETC project. In Section 4 we discuss conclusions and future developments.

The ETC and OTRE Project

The OTRE Project can be briefly described as an experiment in collaborative projects centered around the Eclipse community and tools; at the end, the results will be collected, published and shared in a workshop in which the teams that have collaborated remotely will finally have the opportunity to meet in person and compare their work and results with the other teams involved in the same project. Something like a peer-to-peer conference within the conference, which aims to be much more than a student contest. Each team of students will have to complete a group of tasks whose outcome will be the realization of an Eclipse plug-in, which will have to be integrated with the Eclipse plug-ins developed by the other teams. The OTRE project is part of the program Enforcing Team Cooperation using rational software tools into software engineering, an innovative academic project sponsored by the Italian branch of IBM, IBM-Italia. At the current stage, a restricted number of Universities in Italy have been involved in the pilot project; the participants are the following: University Federico II Napoli, University of Milano Bicocca, University Alma Mater Bologna, University of Bergamo, University of Salerno, the University of Genova with its Savona campus, and the University of Bari with its Taranto campus. The students involved, signed up on the platform, number 322, and they will all work in this initial semester, together with a dozen professors and 7 champion students.
Each University has been forming teams of students from different courses, namely: software engineering courses from the Universities of Napoli Federico II, Bologna and Milano Bicocca; web design and development courses from the Universities of Genova and Bari; an advanced programming course from the University of Bergamo; an advanced design course from the University of Bologna. Heterogeneous and distributed teams will be composed, with students from different cities and with one teacher tutor for each. In addition, for each University, a champion student (usually a computer engineering or computer science PhD student) will support the corresponding teacher. Moreover, the champion student is responsible for the local group. A single computer engineering PhD student (the administrator) has technical direction of the overall ETC platform, both software and hardware. The topics addressed in the project will be directed to a re-engineering of the current Eclipse Italian Community web site. New services will be designed and developed, with the objective of making the site itself the perfect place to repeat the experimentation after this first group of pioneer students and teachers. Future classes will participate, adding further functionality to the system, integrated with the ones already available. After the first running phase, OTRE will represent a repeatable model, so that new experiences can be carried out without the start-up period, with quick set-up time and a smooth learning curve. According to the specifications given by teachers, the newly realized Eclipse-IT portal will have to integrate audio and video contents, paying attention to accessibility and making multi-format information available to visitors. In the present edition of the OTRE project, the activity will be centered around a set of modular components that will be designed and realized to manage events. Possible services will be:
- a live-conference system (live streaming of the talks given at the conference)
- news management
- live-blogging
- interactive boards supporting chat sessions and live discussions
- collaborative systems such as wikis for documentation management (user, developer, and reference manuals, documentation, ...) as well as for the organization of the conference agenda and the topics, in a bar-camp style
- discussion boards for exchanging opinions
- sharing systems
- publication tools addressed to different targets such as the scientific community, the developer community, and the generic and specialist press
- user data and profile management
- mash-up tools for data fusion and aggregation
- a chairman system to manage submissions and review of papers for a conference
- a repository and indexing system for archiving the work done and keeping track of interactions
- content management
- advanced search capabilities
- semantic systems with specific handlers for software libraries and Eclipse plug-ins in particular.

2 The ETC Project Architecture

Before the launch of the OTRE Project, several experiments were carried out with the aim of finding a suitable hardware and software system architecture to implement the ETC concept. The requirement is to build an open environment in which students are enabled to cooperate and innovate by developing projects within their University courses. As a first step, students work together on issues related to software engineering and web programming courses but, in future releases, the tested architecture will be used for students to cooperate on different issues such as management, risk management, etc. Within the proposed system, some cross-university projects will be constructed in order to cooperate with the Eclipse Italian community [3]. On account of the complexity of the architecture, this section is split into four sub-sections: the first deals with the hardware architecture, the second with the software architecture, the third with the so-called peopleware architecture, and the fourth with the knowledgeware architecture. The latter explains more specifically how, in an open innovation network, students are able to cooperate, innovating in a sort of virtual Renaissance workshop.

2.1 The Hardware Architecture of the ETC Project

This section outlines the hardware architecture and the adopted software. The project is supported by a server provided by IBM Italy [2]. Thanks to a special academic/educational agreement, the server is on loan for use by the Eclipse Italian community [3] and it is currently running at the Faculty of Engineering of the University Federico II of Napoli.
This machine has the following characteristics:
- Processor: x3650 M2, Xeon 4C E W 2.4GHz/1066MHz FSB/8MB L3
- Memory: 2x1GB
- Storage: IBM 146 GB 2.5in SFF Slim-HS 15K 6Gbps SAS HDD (4), O/Bay 2.5in HS SAS, SR BR10i, CD-RW/DVD Combo
- Case: 675W p/s, rack
- Power supply: redundant 675W power supply (1)

The network connectivity is designed so that it can support a number of simultaneous accesses; for this phase of the experimentation, between 150 and 200 are expected. The manner in which we plan to use this hardware architecture can be illustrated with the simple example shown in Fig. 1.

Figure 1. Hardware architecture of ETC

Fig. 1 shows that both the application server and the database server, running on the same hardware behind a firewall, are managed by the Eclipse Italian community. All other connected universities can access them through fast, multi-device links, completely independent of the hardware and software platform.

2.2 The Software Architecture of the ETC Project

The software platform is the secret weapon of this project. It is a powerful platform for experimentation in which the standard tools of the IBM Academic Initiative are complemented by further IBM tools awarded for their innovation. The major innovation is to make people interact in the sharing and development of software specifications. The tool on which the collaborative and cooperative experimentation of the student groups is based is Rational Team Concert (RTC), which features an advanced architecture. RTC is based on Jazz; Jazz products embody an innovative approach to integration based on open and flexible services within an Internet architecture. Unlike the

monolithic and closed products of the past, Jazz is an open platform designed to support any industry participant who wants to improve the software lifecycle and break down walls between tools. Jazz is not only the traditional software development community of practitioners helping practitioners; it is also customers and community influencing the direction of products through direct, early, and continuous conversation. The Jazz project is composed of three elements:
- a portfolio of products designed to put the team first
- an architecture for the integration of the life cycle
- a stakeholder community

The Jazz portfolio consists of a common platform and a set of tools that enable all of the members of the extended development team to collaborate more easily. This reflects the central insight that the center of software development is neither the individual nor the process, but the collaboration of the team. RTC, RQM, and RRC are based on Jazz (see Fig. 2):
- Rational Team Concert (RTC): a collaborative work environment for developers, architects, and project managers with work item, source control, build management, and iteration planning support. It supports any process and includes agile planning templates for Scrum, OpenUP, and the Eclipse Way.
- Rational Quality Manager (RQM): a web-based test management environment for decision makers and quality professionals. It provides a customizable solution for test planning, workflow control, tracking, and reporting, capable of quantifying the impact of project decisions on business objectives.
- Rational Requirements Composer (RRC): a requirements definition solution that includes visual, easy-to-use elicitation and definition capabilities. Requirements Composer enables the capture and refinement of business needs into unambiguous requirements that drive improved quality, speed, and alignment.
The software architecture of ETC is a distinctive aspect of the project: it is important for it to be flexible (applicable to each course of the trial) and usable by students at multiple levels (both three-year and master students). At the time of writing, the architecture of RTC itself is evolving in the laboratories of the IBM Rational brand at the IBM Lab in Raleigh, USA. A very useful work in this direction was presented at the Eclipse Italian convention by Scott Rich, Distinguished Engineer at the IBM Rational Laboratories in Raleigh [4], showing the strategic evolution of this important cooperation tool. Meanwhile, a snapshot of the architecture can be seen in Fig. 2. It is a hybrid between a layered (three-tier) architecture and a service-oriented architecture and is based on the paradigm of cooperative development of projects: it thus provides the means for the conception of projects as well as for their implementation and quality verification.

Figure 2. RTC architecture (thanks to Scott Rich from IBM)

The architecture of RTC can be considered innovative because it combines both the development processes and the work tools required to implement them. In particular, it has built-in a number of models of the development process, including agile ones. We consider that agile programming may be well suited to the trial, given the short time available for the completion of tasks (9 weeks). In particular, we plan to adopt SCRUM 1 as the agile process and OpenUP 2 as the standard one.

2.3 The Peopleware Architecture of the ETC Project

The ETC project has a complex peopleware organization. It consists of nine universities (Bari, Bergamo, Bologna-AlmaMater, Genova-Savona, Milano-Bicocca, Napoli-Federico II, Salerno, Taranto) with 350 registered students, a champion team with 8 PhD students (one per site), a platform manager and system administrator, and at least 15 academics (professors, researchers, and those responsible for the courses under experimentation). The courses are organized locally in complete autonomy. Since each course needs to develop its own content, the organization also requires its own tools. The use of other tools is not precluded; additional ones may be adopted for reasons that will emerge during the implementation of individual projects.

1 rt-1/index.html
2 and the Eclipse Process Framework available at

It is worthwhile to stress that a non-standard use of tools in different situations and contexts may be useful. Fig. 3 shows an example of the association between courses and tools that was made for the software engineering course, after it was agreed with all the venues where the course is held. Moreover, the same piece of mind map, developed collaboratively, shows a detail of the agenda which, at the time of writing, provided a structured sequence of tasks, some critical issues, and the project status. It is worth underlining that this type of activity management is a way to cooperate better, especially when multiple units are involved and they are geographically spread. Besides, the tools that can be effectively used by students while carrying out their duties are meant to represent the evolution of their activities in an easily understandable framework.

Figure 3. Association courses-tools and Agenda of ETC project

Coming back to the software engineering course (see Fig. 3), one can notice that it has associations (links) with the tools of the ETC platform and, in particular:
- RTC is used to allow cooperation among students
- RRC/RRP is used for the definition and sharing of requirements
- RSA (Rational Software Architect) is used for detailed architectural design
- Eclipse is used for Java development
- JUnit is used for testing
- RQM (Rational Quality Manager) is used for the construction of test cases.

This is an example of how the ETC platform is configured once the course is chosen. The "software engineering" course can now be used by the Universities of Napoli-Federico II, Bologna-AlmaMater, and Milano-Bicocca. Another example is related to the course on the design of web-based applications. In this case, both RTC (for cooperation among students) and RRC/RTC (for the composition of the requirements) are used. For the development of web pages, the Eclipse platform can be used with the WTP plug-in, as in the reported case of the Genova-Savona University.
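Since JUnit is listed among the course tools, a minimal sketch of the kind of unit test students write on existing code may be useful. The class under test (a `reverse` utility) is purely hypothetical, not taken from any actual course project, and the checks are written as plain Java so the sketch runs without the JUnit library on the classpath:

```java
// Hypothetical sketch: a tiny piece of "existing Java code" and a
// JUnit-style check for it. Both the Reverser utility and the checks
// are illustrative only; they are not actual course artifacts.
public class ReverserTest {

    // The existing code a student team might be asked to test.
    static String reverse(String s) {
        return new StringBuilder(s).reverse().toString();
    }

    public static void main(String[] args) {
        // JUnit-style assertions, written as plain checks so that the
        // sketch is self-contained and runnable as-is.
        check(reverse("abc").equals("cba"), "reverse of abc");
        check(reverse("").equals(""), "reverse of empty string");
        System.out.println("all checks passed");
    }

    static void check(boolean condition, String label) {
        if (!condition) throw new AssertionError("failed: " + label);
    }
}
```

In an actual course setting, the same checks would be expressed with JUnit annotations and run from inside Eclipse.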
Finally, the course "fondamenti III", held in Bergamo, had the task of developing testing activities on existing Java code; the platform was therefore customized using the following modules: RTC for the coordination

of groups and individuals involved in the projects, RQM for the construction of test cases, and Eclipse with JUnit for testing.

2.4 The Knowledge-sharing Architecture of the ETC Project

In the authors' opinion, it is worthwhile to give details on the network of knowledge that has been created in this project and on the mechanisms of creation, dissemination, and utilization of the knowledge itself. The ETC project has an open knowledge-sharing organization, due to the fact that it is based on the concept of an open innovation network 3 in which many local virtual Renaissance workshops contribute to the growth of knowledge, shared and distributed in the network. The many projects inside the ETC project itself are very diverse, but they sometimes require a body of common knowledge. Important, meanwhile, is the existence, at every venue of the trial, of the figure we named the champion student: a student much more experienced than the average students participating in the trial. Champion students hold a PhD degree, typically with five years of experience in the field of engineering, or a degree in the same sector. Also interesting is the relationship within the work team made up of the champion students; this team, in fact, supports the whole ETC project. Beyond that, there is a figure even more experienced than the champion team: the one who controls the hardware and software system, an engineer who has visibility and management of the contents as well as the ability to manage the existing services and to check that everything is in order. In this way, the ETC trial more closely resembles a distributed Renaissance workshop. From this point of view, ETC is the opportunity to organize and support projects by activating flows of knowledge as possible mechanisms for facilitating exchange and interaction.
Precisely for this purpose, a special team works on the tools that enable the network to learn more easily, through the study and development of tools (not included in the software provided by IBM) that may be needed to implement some courses. It is worth pointing out that the tools that the IBM Academic Initiative has made available to the ETC project are normally used by experienced researchers, and that this is the first occasion on which they are tested in university teaching. Clear evidence that innovative tools are required is reported in Fig. 4: a solution to synchronize the information, actually used and approved by the champion student team. The design team has created an Eclipse plug-in that provides alignment of the documentation, in the correct version, for all 350 participants in the project.

3 Eclipse-Open-Innovation.pdf
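A minimal sketch of how such a plug-in might read an XML catalog of educational resources is given below. The catalog layout and the element and attribute names (`catalog`, `resource`, `type`, `name`) are our own assumptions, since the paper does not specify the actual file format used by the plug-in:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: parsing a resource catalog such as the one the
// ETC documentation plug-in could fetch from the central server.
public class ResourceCatalog {

    // Count the catalog entries of a given type (e.g. "tutorial").
    public static int countByType(String xml, String type) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        NodeList resources = doc.getElementsByTagName("resource");
        int count = 0;
        for (int i = 0; i < resources.getLength(); i++) {
            Element r = (Element) resources.item(i);
            if (type.equals(r.getAttribute("type"))) count++;
        }
        return count;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<catalog>"
                + "<resource type='tutorial' name='RTC basics'/>"
                + "<resource type='slides' name='Lesson 1'/>"
                + "<resource type='tutorial' name='JUnit intro'/>"
                + "</catalog>";
        System.out.println("tutorials: " + countByType(xml, "tutorial"));
    }
}
```

The real plug-in would fetch the catalog over the network and compare it with the locally installed resources before updating them.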

Figure 4. ETC: web services component sharing documentation tool

The figure shows the existence of different resources and their classification (ebooks, readme files, tutorials, slides, etc.). In this way, a student no longer needs to bother searching for a resource: simply launching the Eclipse RTC client, the student just lets the XML file update the plug-in and selects the menu command View / ingsoftwareplugin contents to update the existing portfolio of educational resources. Note also that the plug-in turns the students' home-made web searches into a more effective and efficient web service.

3 The ETC Project: First Results and Comments

The work of building projects between different universities was very important at this stage of ETC. It is designed to give space to locally relevant projects, so as to preserve the traditions of each university's teaching, but it also aims to inaugurate a season of projects between universities that enable courses with the same name to engage in projects involving students from different universities. Thus, both intra-university and inter-university projects were born. Fig. 5 shows a view of RTC that, on the right side, shows the projects currently active in the ETC experimentation. These projects include students, champion students, professors, and undergraduates, for a total of about 322 users. In particular, 6 out of the 11 projects surveyed have been conducted by more than one University. Three projects are local (Federico II): one is for the re-engineering of the Eclipse Italian community web site [3] and another one supports ETC itself in the development of new technologies. Another local project was built in order to study the knowledge management and sharing of the students involved in ETC (KM&KE). ECLI-IS supports didactics in the software engineering courses, and Ecli-Law is, a little bit off-topic, a simulated environment for problem solving in the field of law. Bergamo University has also developed locally a number

of projects related to the testing of existing software systems, while the University of Bologna Alma Mater is still in the start-up phase.

Figure 5. ETC work item

The left side of Fig. 5 also shows each project. As an example, OTRE - Single-sign-in system has its own team, and each team consists of people who play different roles (but sometimes even the same role) in a default software development process (in this case OpenUP). The process binds people to the manner and timing of activities and also provides a common management and versioning of all the documentation produced by the process itself. In other words, no student, regardless of his or her geographical position, has to worry about where the project data are, as these are automatically retrieved from wherever the project manager has decided to store the documentation (even locally on a server). The research group at the University of Genova-Savona is carrying out a slightly different experimentation, working on a "web design" course for non-engineering students. In particular, the course is provided to students of Communication Sciences, and the activity of "web design" is to be intended in terms of system specification and functionality, not in terms of software development. In this case too, the most important part of the work is strictly connected to inter-communication, inter-operation, and cooperation. Students carry out a collaborative work of requirement gathering and analysis, and the outcome of their project will be used to produce the input to the design phase of the web pages and services. Students were recently given a questionnaire to identify and correct problems that occurred because of the difficulty of the ETC platform. Let us point out that these students are not enrolled in an engineering course and may therefore have problems using the platform. However, they have a much greater propensity for collaboration, which is more difficult to find in engineering students. It would be

interesting to see cooperation between these two types of students in joint projects between universities.

4 Conclusions and Future Developments

The aim of this section is to present the first results of the ETC project, comment on them, and outline future developments. The idea of building an open environment in which students and teachers are enabled and encouraged to cooperate and innovate by building projects within the courses of their universities has been very much appreciated. The ETC project brings many benefits, among which we recognize the following:
- the possibility of allowing students from different universities to interact on different projects on the same learning content (e.g., software engineering courses);
- the opportunity given to teachers to cooperate on the same educational content, allowing them to convey innovations more effectively;
- as a third benefit, related to innovation 4, ETC represents an experience that helps to innovate and to show students what innovation is and how it is created through the use of software tools of the highest level. The ETC project itself represents an innovation, because it is a special environment in which building projects includes innovative ways to learn and to train new generations;
- the last benefit of the ETC project is the special relationship with companies of high level and strong penetration in the field of applied software and hardware research.

Thanks to the cooperation with IBM Rational Italy and the IBM Academic Initiative, the ETC project could see the light, and it was the occasion for dozens of students to interact, for the first time, with high-level researchers in their industry, otherwise unreachable. This opportunity was initially intimidating but, in a short time, the students learned to interact easily. It is thus a practical way of bridging universities and companies in a very simple manner: designing a training course together.
Finally, it is noteworthy that the ETC project bears witness to the need to re-think the way e-learning resources are designed and built. Content increasingly has to consist of services over a network, and this is an innovation in e-learning. It is no coincidence that the first network service of ETC is the project related to documentation, which has been transformed into a web service, up to date and always online. This leads to a lower installation time and to a better use of the ETC project environment, fruitful for both teachers and students: since the documentation is provided as a web service, the average student does not waste time searching through the maze of the network. On the other hand, the teacher has only one place with only one version of the documents made available. We believe that this new philosophy improves productivity and speeds up the dissemination of concepts. A team of undergraduates of the project is currently working on turning further activities into services, in order to bring new evidence to the knowledge network involved in the RTC project. It is also noteworthy that ETC

4 Invention means the creative result of an act of problem solving, while innovation is its application: the refinement of the invention and its transformation into usable products.

is also a workshop for experimentation with innovative applications: once students have used the same tools as IBM, the changes they propose can be tested and incorporated back into the tool business. Next year, the experimentation is expected to involve a larger number of participants: up to a thousand students, a dozen universities, and a dozen courses.

References

1. Johnson, D. W., Johnson, R. T., Holubec, E.: Apprendimento cooperativo in classe. Erickson (1994).
2. IBM:
3. Eclipse Italian community
4. Rich, S.: The Modern Platform for Software Engineering Tools, in Gargantini, A. (ed.) Proc. of the 4th Workshop of the Eclipse Italian community, Bergamo, Italy, 31-38 (2009).
5. Maresca, P.: Projects and goals for the Eclipse Italian community, in Proceedings of the Fourteenth International Conference on Distributed Multimedia Systems (DMS2008), Knowledge Systems Institute Graduate School, Boston, USA (2008).
6. Coccoli, M., Maresca, P., Stanganelli, L.: Enforcing Team Cooperation: an example of Computer Supported Collaborative Learning in Software Engineering, in Proceedings of the Seventeenth International Conference on Distributed Multimedia Systems (DMS2010), Oak Brook, Illinois, USA, October 14-16, 2010, Knowledge Systems Institute Graduate School, USA, to be published.
7. Rational software tools. Available at

On the downscaling of the Jazz platform

Experimenting with the Jazz RTC platform in a teaching course

Angelo Gargantini 1, Guido Salvaneschi 2, and Patrizia Scandurra 1

1 Università degli studi di Bergamo, 2 Politecnico di Milano

Abstract. We believe that students should use collaborative and project management tools similar to those they will encounter in their professional life. Moreover, we believe that Universities could promote and encourage the use of best practices supported by efficient tools by teaching students their use. IBM Rational Team Concert (RTC) is a good candidate for use as an industrial tool in University courses, since it supports many best practices in a complete collaborative development environment providing planning, source code management, work item management, and build management, and it has already been successfully used in industries and software houses. However, we experienced that RTC is too rich in the features it offers, and this makes its adoption very expensive in terms of students' and teachers' time and effort. For this reason, we have studied and present in this paper a simplified use of RTC, which consists in reducing the concepts (at the ontological level) and the features offered to the students (at the practical level).

1 Introduction

In the first courses of computer science faculties, students are widely exposed to the use of programming languages such as C/C++, Java, or Python. In many cases, the subsequent courses give them a higher-level approach through the study of design patterns, software architectures and processes, and the other common topics of the software engineering field. The abstractions taught in these courses are fundamental for the students' ability to analyze problems and devise suitable solutions.
While these are the conceptual tools that will probably make the difference in their professional career, many of the tools that are widely used in the industrial production of software are usually neglected in graduate courses. This is the case for tools that support project planning and monitoring, task assignment and team collaboration, and process adoption enforcement. Sometimes even configuration management systems are presented only in a theoretical lesson, and their use is not subsequently imposed as a standard practice for software development in university projects. On the other hand, we found that many software companies, especially small enterprises like those operating near our University, are reluctant to adopt

best practices (like configuration management and structured collaborative environments) because they estimate that the cost of their adoption, even if supported by tools, will not compensate for the gained efficiency. In this scenario, Universities can promote the adoption of these best practices by teaching students the concepts at the conceptual level and the use of tools at the practical level, and in this way lower the cost of their use. We believe that the adoption of a collaborative development environment can be an effective way of teaching source code management in parallel with planning and process monitoring topics. One of the issues of introducing these types of tools in a course is that they are usually dimensioned for industrial-size applications. Being conceived for the software development of an entire product line, they allow for the management of several teams, multiple time lines, and different releases. This complexity constitutes a barrier for students, who have to deal with a steep learning curve before reaching a point at which they can effectively take advantage of the tool. We have experimented with the use of Jazz Rational Team Concert (RTC) [2, 6, 5] in a graduate project course which integrates some frontal lessons on advanced programming and software engineering topics. We spent a decisive effort in reducing the complexity of the tool by simplifying the ontological model of the entities to which the students are initially exposed. In our opinion, this is essential to reduce the barrier to entry and make a tool like this suitable for an introductory course. While this process by no means made the usage of Jazz trivial, we succeeded in the goal of achieving a good familiarity of the students with the tool. This was mostly due to the flexibility of Jazz, which allows for its adoption in huge projects but, with some care, makes it usable also for very small works. This paper is organized as follows.
Section 2 provides a brief overview of RTC and its features. Section 3 presents the conceptual model of the RTC features and the simplifications we adopted. Finally, Section 4 reports some related work, while Section 5 presents some conclusions and describes our future directions.

2 Elements of collaboration in the Jazz RTC environment

RTC is a collaborative development environment which provides support for project planning, source code and build management, task assignment, project health monitoring, and reporting. The tool also integrates process support. RTC is part of a suite of IBM tools for team collaboration, such as Rational Quality Manager, which is a test management environment for test planning, workflow control, tracking, and reporting, and Rational Requirements Composer, an environment for requirements elicitation and definition. RTC is released as a server-side repository with a web interface and a build system engine, together with a development environment integrated with the Eclipse IDE [1] or available as a Visual Studio [3] plugin. An RTC repository allows us to define different Project Areas, to which the users who collaborate on the same project subscribe. It is also possible to organize developers in separate teams, assigning a Team Area to each of them. Tasks

(development activities, bug fixes, etc.) are assigned to the users by means of Work Items. The overall project evolution is monitored through time lines, which establish milestones and releases. In addition to the support for project management, RTC offers all the traditional functionalities of a configuration management system, such as versioning, patch application, conflict resolution, and code locks. It is possible to configure a separate server for long-running builds, fully integrated with the RTC environment. As a general consideration, we observe that many aspects of RTC are designed for very large scale projects. For example, the support of many collaborating teams with different time lines for the same project is commonly used for parallel development, such as in activities where some developers are working on implementing the features of the next release of a product while other teams are involved in fixing bugs of the current product release. Another example of this large scale approach is given by the assignment of tasks: a developer retrieves his or her tasks by querying the system through a set of selection conditions. This schema applies well in an environment with a huge number of tasks and an analyst in charge of the project planning, who assigns the different tasks to the developers and whose role is in principle different from that of the developers.

3 Downscaling the Jazz RTC environment

In this section, we expose the choices that we made for adopting Jazz as a collaborative framework for a graduate project course. Since the tool is very powerful (it exposes a huge set of entities with complex semantics and relations among them), we decided to make a set of key simplifications in order to make it usable by the students with a reduced initial effort. Our starting point was the development of a metamodel 3 of the RTC concepts to adopt and simplify.
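To give a flavour of the direction this simplification takes, the reduced concept set can itself be sketched in code. The class, enum, and method names below are our own rendering for illustration, not actual RTC API classes:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical rendering of the simplified concepts: one project area
// per team, a single implicit "team member" role (so no Role class at
// all), and only three work item types.
public class SimplifiedModel {

    enum WorkItemType { DEFECT, TASK, ENHANCEMENT }

    static class WorkItem {
        final WorkItemType type;
        final String owner;   // every user holds the same team-member role
        String state = "new";

        WorkItem(WorkItemType type, String owner) {
            this.type = type;
            this.owner = owner;
        }

        void resolve() { state = "resolved"; }
    }

    static class ProjectArea {
        final String name;
        final List<WorkItem> items = new ArrayList<>();

        ProjectArea(String name) { this.name = name; }

        WorkItem open(WorkItemType type, String owner) {
            WorkItem wi = new WorkItem(type, owner);
            items.add(wi);
            return wi;
        }

        long openItems() {
            return items.stream().filter(i -> "new".equals(i.state)).count();
        }
    }

    public static void main(String[] args) {
        ProjectArea area = new ProjectArea("OTRE");
        WorkItem bug = area.open(WorkItemType.DEFECT, "student1");
        area.open(WorkItemType.TASK, "student2");
        bug.resolve();
        System.out.println("open items: " + area.openItems());
    }
}
```

The point of the sketch is how little remains once team areas, roles, and category hierarchies are stripped away.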
We tried to simplify relations among concepts as much as possible, for example by reducing in some cases n-ary relations (with multiplicity 1..n) to unary relations. These relation simplifications are clearly documented in the RTC metamodel (see the following subsections) by grey boxes indicating the new cardinality. We removed unnecessary entities or made their presence transparent to the students (e.g., by adopting a single default category where a hierarchy of nested categories could be defined). When we actually removed an entity or a relation, we added to the metamodel a grey cross on the class or association representing the concept; instead, when we only wanted to make an entity somewhat transparent to the students, the entity is shown in the diagrams with an empty

3 In software engineering, metamodeling is the construction of a collection of concepts (things, terms, etc.) within a certain application domain. A model is an abstraction of phenomena in the real world; a metamodel is yet another abstraction, highlighting properties of the model itself. In practice, metamodeling implies the development of UML-like class diagrams to describe and analyze the relations between concepts.

cross on them. In some cases, a simplification is made by restricting the possible instances of an entity; this is reported only in the description text. We categorize RTC concepts in three perspectives: Work Organization, for concepts related to the organization of the developers' work; SW Artifacts Management, for concepts related to the management of the SW artifacts being produced; and Development Process, for concepts pertaining to the development process. In particular, we adopted a simplified Agile Model Driven Development process [4] whose schedule organization was modeled inside Jazz. The concept of process in Jazz is somewhat complex, involving a large set of general choices such as the defined roles, the access rights associated with each role, a default timeline, or the available work item types. To this purpose, we exploit the Jazz Simple Team Process, which allows team members to perform any kind of modification while denying everything to outsiders, and defines a very simple set of default values which best suits our needs.

3.1 Work Organization

Fig. 1 shows a fragment of the RTC concepts related to the Work Organization perspective. First of all, we decided to eliminate the concept of team area (see class TeamArea), which is used in Jazz for organizing a small group of developers who are in charge of a subset of the whole project. Since our projects are quite small, we decided to remove this intermediate way of aggregating developers. So, in our case, each project is assigned to a project area, and each team of students (users of the RTC platform) has a dedicated project area which is not accessible by other teams. In the general case, users can be associated with an area in different roles. The common application of this feature is to separate developers, who have access to code modifications, from users who are in charge of planning and monitoring the evolution of the project and are allowed to modify plans and task assignments.
We decided to make a strong simplification by adopting a unique team member role (as an instance of the Role class) which allows for every action inside an area. For this reason, in the metamodel portion shown in Fig. 1, the multiplicity of the association ends terminating on the Role entity is reduced to one. This essentially removes the concept of role from our schema, because each user who is assigned to an area has the team member role inside that area. Inside Jazz RTC, a work item is the way of keeping track of the tasks and issues that a team needs to address during the development cycle. Work items are associated with an area and keep a reference to the user who created them, while ownership indicates that a certain task is assigned to a specified user. Developer activities make the state of work items advance. To reflect this, work items have attributes such as a time estimation for completing the task, a severity and priority evaluation, a state value such as new or resolved, and a type. We decided to keep the set of available work item types small, allowing only for the Defect, Task, and Enhancement types. Finally, in Jazz it is possible to define work item categories inside a project area. Categories can be nested and organized in a hierarchy. The aim of categories

is to assign each work item to one of them; developers who are interested in a certain category subscribe to it in order to be notified when the state of an item belonging to the category changes. We tried to make the concept of work item category transparent to the students, keeping a default root category to which all the work items belong. In any case, students are free to organize items in categories and trigger the notification mechanism on their own.

Fig. 1. RTC metamodel: Work Organization

3.2 SW Artifacts Management

Fig. 2 shows a fragment of the RTC concepts related to the SW Artifacts Management perspective. Repository workspaces are used in Jazz as a remote copy of the developer's work. They can be used for backup and as a source for delivering a change set to a branch. A developer can own multiple repository workspaces inside the same repository. We limited each student to a single repository workspace, instead of the many that the platform could make available to him or her; this workspace is used both to back up the work and to later deliver change sets to a stream. A single stream acts as a central repository for all the students participating in the same project. This is related to the fact that we have adopted a single timeline (see subsection 3.3) for the project, so the unique repository of the area is like an instantaneous photograph of the current state of development of

the project. This is quite different from the full functionality of Jazz RTC, which allows the adoption of many streams containing the same component (read: SW artifact) at different states of development (i.e., in different versions). It is common to divide a big project into several components, each of which exports a subset of the system functionalities and has an explicit interface and dependencies on the execution context. We decided to simplify this drastically by making each project consist of a single component which contains all the functionalities that the project must implement. This choice basically removes the original concept of component from the platform, because each versioned file is assigned to the same component. We believe that this is a somewhat extreme choice, this solution being suitable only for very small projects. Even though we discourage the use of components in RTC (because, in our view, the presence of different components makes it harder to understand the basics of code versioning at the beginning), students are still encouraged to apply the principles of component-based development to their software artifacts, although this will not be traceable in RTC. However, expert students can eventually also exploit the RTC component concept, taking advantage of this feature on their own. Changes to a component are represented in terms of change sets. While a developer modifies the code inside his or her local workspace, Jazz RTC keeps track of the differences between the local workspace and the (remote) repository workspace and adds these modifications to one or more change sets. A change set is always associated with exactly one component. The developer can commit the change sets to his or her repository workspace, making it synchronized with the local workspace.
With a subsequent delivery operation, it is possible to apply the change sets collected in a repository workspace to a stream, sharing the modifications with the other users of the same stream. The history of the change sets applied to a component is stored inside a repository, so that the sequence of changes can be inspected and, under certain conditions, undo operations are also possible. In the RTC platform it is possible to take a picture of the state of development of a component, as a base point for further use/development, by creating a baseline in the component's change history. When a component is created inside a repository, it is initialized with an initial baseline representing the empty component before the addition of any change set. We decided to remove the concept of baseline from our model, as we considered it non-essential for our purposes. Another concept we decided to remove is the snapshot, which is a repository object including exactly one baseline for each component in a repository workspace. This is useful for recreating a workspace configuration which is considered important. Snapshots and baselines can be delivered in the same way as change sets and in this sense can be seen as collections of change sets; this is the reason why in our diagram we have a relation between snapshots, baselines and change sets.
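The downscaled commit/deliver flow just described can be sketched as follows. This is an illustrative model with names of our own (not the RTC API), under our simplifying assumptions: one repository workspace per student, one stream per project, and a single implicit component, so change sets carry no component reference.

```python
class ChangeSet:
    """A set of modified files; the single component is implicit."""
    def __init__(self, files):
        self.files = files

class RepositoryWorkspace:
    """Remote copy of one student's work (exactly one per student)."""
    def __init__(self):
        self.pending = []  # change sets committed but not yet delivered

    def commit(self, change_set):
        """Synchronize the local workspace with the repository workspace."""
        self.pending.append(change_set)

    def deliver(self, stream):
        """Apply all pending change sets to the project's single stream."""
        stream.history.extend(self.pending)
        delivered, self.pending = self.pending, []
        return delivered

class Stream:
    """The unique central repository of a project area."""
    def __init__(self):
        self.history = []  # inspectable sequence of applied change sets

ws = RepositoryWorkspace()
ws.commit(ChangeSet(["Main.java"]))
ws.commit(ChangeSet(["Util.java"]))
stream = Stream()
ws.deliver(stream)
print(len(stream.history))  # 2
```

Because there is a single stream, delivering simply appends to one shared history, which matches the "instantaneous photograph" view of the project state described above.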

Fig. 2. RTC metamodel: SW Artifacts Management

3.3 Development process

As is natural in an advanced course, we expect from the students a substantial analysis effort before the implementation phase. Students are required to create modeling artifacts that describe their architectural design choices and even to envision several alternatives, evaluating the advantages of each solution. For this reason, we believe that a process centered on the use of high-level models, in the UML style for example, is best suited for our teaching purposes. However, the modeling effort should serve the comprehension of the solutions, and its results should be immediately applicable. These considerations, together with the small size of the teams and the ease of interaction between students, led us to the choice of an agile process that easily allows for simple management and rapid prototyping. The AMDD [4] process is the agile version of a model-driven development process. In a model-driven development process, like the OMG's Model Driven Architecture (MDA), extensive models are created and refined throughout the development to guide the implementation of the final source code (which is possibly generated, at least in part, automatically from the models). With AMDD, models are used in an agile way, by creating in each phase only those modeling artifacts that are good enough to drive the development activity. AMDD is best suited for small and co-located teams, which is the common case for student groups in a university project. We adopted a simplified version of the AMDD process also because we believe this can help in achieving our twofold goal: on the one hand, we want to push the students toward a systematic approach to their work; on the other hand, we want to provide them with an agile methodology without excessive constraints. Fig. 3 shows the lifecycle of the AMDD-like development process we adopted. Each box represents a development activity.
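The schedule of the lifecycle in Fig. 3 (an envisioning iteration followed by repeated development iterations) can be sketched as a small generator. The phase names follow [4]; the generator itself is our own illustrative simplification.

```python
def amdd_schedule(n_iterations):
    """Yield (iteration name, list of phases) for our AMDD-like process."""
    # Iteration 0: initial envisioning of requirements and architecture.
    yield ("iteration 0", ["Requirements Envisioning",
                           "Architectural Envisioning"])
    # Then an arbitrary number of short modeling/development iterations.
    for i in range(1, n_iterations + 1):
        yield (f"iteration {i}", ["Iteration Modeling", "Model Storming",
                                  "Development & Testing"])

for name, phases in amdd_schedule(2):
    print(name, phases)
```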
Essentially, it comprises the typical AMDD activities of Envisioning, Iteration Modeling, Model Storming and Development, organized as follows. The envisioning (iteration 0) includes two main sub-activities: Requirements Envisioning, for collecting and representing initial functional requirements (usually through text and use case models), and Architectural Envisioning, to produce an initial SW architecture model (in terms of UML component/deployment diagrams) following component-based development concepts and principles (components, provided/required interfaces, hierarchical assembly, reuse, etc.). Then, an arbitrary number of iterations can follow. Each of these subsequent iterations is organized in three (self-explanatory) phases: Modeling, Model Storming, and Development & Testing (in parallel). The time indicated in each box represents the length of an average session. The basic idea is that you perhaps model for a few minutes, then code for several hours. In order to encode our AMDD process within the RTC tool, we considered the RTC concepts shown in Fig. 4. In the RTC environment, it is possible to create different timelines inside each area: in a big project, it is common to have different time schedules, such as the timeline for a stable release and the timeline for the beta version of the next release. However, the goal of the students is usually to develop a stable version of their software for the deadline of

Fig. 3. AMDD development process, adapted from [4]

the exam, which can be easily achieved with a single timeline containing several intermediate milestones. Each timeline can contain several iterations, possibly nested, representing the progressive evolution phases of the project. Inside RTC, plans are used to manage work items in the context of the given time constraints. Plans are used to modify work assignments for team members and, being synchronized with the status of the work items, they can be used to track the progress of the work. While it is possible to create many plans for each iteration, we limited this feature, choosing to associate a single plan with each iteration in the timeline. We also decided to associate plans only with the lowest-level iterations in the timeline and to limit the types of plans that can be used. In fact, in the RTC environment it is possible to define not only iteration plans, but also plans associated with a team release, with a project release, and others. These types of plans are useful for the high-level monitoring of the project progress, once one decides to keep track only of certain work item types, defined as top-level, that describe high-level tasks. We decided to simplify these aspects by keeping only the iteration plans, which are used for the fine-grained monitoring of the progress of each iteration. The screenshot in Fig. 5 shows the RTC timeline of a typical project following our AMDD process. It can be seen as an instantiation of the metamodeling concepts represented in Fig. 4. Our AMDD process time schedule structures the timeline of the project. Students are encouraged to create the timeline in Jazz

RTC (see Fig. 5), filling in the iterations and adding open work items to each iteration as a to-do list for each phase of the project.

Fig. 4. RTC metamodel: development process perspective

4 Related work

In this paper, we presented a downscaling approach for the Jazz RTC platform, experimented with during a software design/programming course at the engineering faculty of the University of Bergamo. We adopted this technique to allow students (divided into small teams) to develop and deliver the design/software artifacts of their project works (assignments for the final exam) in a collaborative and traceable manner. Though the proposed approach applies numerous simplifications to the original Jazz vision [5], it is advantageous for the usability obtained and because it helps us incorporate within the RTC tool the software development best practices on agile planning, traceability, iterative development, management of different releases, etc., exactly as we explained them during frontal lectures. Very few papers exist in the literature (see, for example, [8, 9]) on the use of the Jazz RTC tool in university courses to support the teaching of software engineering practices. These papers summarize the objectives and the most visible advantages (similar to ours) obtained using RTC from the teaching point of view; but, to the best of our knowledge, none of them clearly explains the concepts, simplifications and configurations adopted to downscale the RTC platform. Instead,

Fig. 5. The simplified AMDD process within RTC

inspired by the work in [7], we illustrated, through the metamodeling technique, in a transparent and systematic way, the pruning effects of our downscaling approach for teaching purposes.

5 Conclusions and future directions

In this paper, we presented the motivations and the work done for downscaling RTC so that it can be used in an academic course. The effort required to understand where RTC could be simplified was remarkable, but it has allowed a fair explanation of the use of the platform in a reasonable time. We hope that the students will explore other concepts and features during their autonomous use of the tool while preparing the final project for the exam. In the light of our experience, we found that starting from a very rich tool with a considerable number of features and trying to simplify it requires a greater initial effort than the scenario in which the tool is initially very simple, offers very few functionalities, and new concepts and features are later added by means of plugins. We believe that if RTC initially offered a simple, clean environment which could be enriched by plugins and auxiliary tools, it would achieve a wider spread, especially among small companies which are not ready to embrace the entire Jazz/RTC philosophy and which still require a long maturation path in order to adopt all the best practices supported by the tool itself. The proposed approach allows for its adoption in very small teams, but, as future work, we aim at refining/revising our technique to further dimension the RTC platform for students' academic-size applications, in order to allow for the management of larger teams by including specific development roles and multiple timelines. We also plan to conduct a series of interviews with students

to gather their impressions of and reactions to Jazz, and to use their experience to design the next iteration of our downscaling approach.

References

1. Eclipse Project,
2. IBM Rational Team Concert,
3. Microsoft Visual Studio,
4. Agile Model Driven Development,
5. Cheng, L., Hupfer, S., Ross, S., Patterson, J.: Jazzing up Eclipse with collaborative tools. In: Proceedings of the 2003 OOPSLA Workshop on Eclipse Technology Exchange (Anaheim, California, October 27, 2003). Eclipse '03. ACM, New York, NY.
6. Frost, R.: Jazz and the Eclipse Way of Collaboration. IEEE Software 24, IEEE Computer Society, 2007.
7. Maresca, P., Cotugno, A., Mignogna, S., Longobardi, R., Donatelli, A., Gangemi, R.: Business process Eclipse Editor (BEE). In: Proceedings of the 3rd Italian Workshop on Eclipse Technologies, Bari, Italy, November 17-18.
8. Meneely, A., Williams, L.: On preparing students for distributed software development with a synchronous, collaborative development platform. SIGCSE Bull. 41, 1 (Mar. 2009).
9. Prochazka, J.: How Jazz Rocks Teaching Iterative Software Development: Utilizing IBM Rational Team Concert at the University of Ostrava. In: CSEDU, Proceedings of the First International Conference on Computer Supported Education, Lisboa, Portugal, March 23-26, Volume 2, INSTICC Press.

Enhancing team cooperation through building innovative teaching resources: the ETC_DOC project

Paolo Maresca 1, Giuseppe Marco Scarfogliero 2, Lidia Stanganelli 3
1 Dipartimento di informatica e sistemistica, Università di Napoli Federico II
2 Dipartimento di informatica, sistemistica e telematica, Università di Genova
3 IBM Italia S.p.A.

Abstract. That didactics has evolved into a new form over the last decade is a fact. This brings the need for a new way of teaching, one that also uses the modern tools furnished by technological progress. The latest e-learning systems cannot neglect that their main users have a basic computer education and generally broad web experience. Our intent is to discuss a new concept of e-learning system, which uses the capabilities provided by the proficiency/deficiency theory to build personalized learning materials for each user and to keep the user profile updated. This environment, whose aim is to be a single platform in which users can access the lessons and teachers can build them, aided by the intelligence of the system, derives from a mixed approach based on modern web technologies, to realize its community and lesson sides, and on the Rational Team Concert (Jazz) collaborative software, to build its practical and team-working side. In a collaborative environment there is a need to provide students with innovative learning resources that support learning. These resources can be constructed from other available resources or may even be prepared by the community itself. This paper discusses these issues.

1 Introduction

The process of learning a particular concept or topic is very subjective: times and learning styles may vary from person to person, depending on their backgrounds, previous knowledge, and environmental and psycho-physical factors.
So it may happen that, in a given group of people, someone may complete the same learning activities in less time than the others, or that the same person, in different time periods and in different environmental conditions, may take a different amount of time or need a different amount of information to achieve the same result. Although this is well known, the traditional teaching supports (textbooks, lecture notes, articles, encyclopaedias, etc.) move in the totally opposite direction: in the case

of encyclopaedias or treatises, there is a tendency to treat the particular subject exhaustively, leaving readers to select the parts necessary for their education. In the case of books or articles, the writer decides which information he considers necessary for the understanding of a topic; this typically allows the reader to learn faster and to focus quickly on the issues of interest, but the information provided may not be really useful for the education of the person, who must complete the entire reading of the document to find this out. The problem is even more evident in the case of multimedia sources such as audio, video or interactive tutorials. It is obvious that this process is not optimized: in each case the student must spend part of his time and energy selecting from the available sources the parts essential to his education, building his own set of information to enrich his learning. In the case of educational resources for a community of practice, it is difficult to think that those used for individual learning can still work well. While this paper was being written, an ongoing project [1] involving seven Italian universities and supported by IBM was started. One of its key problems is precisely to observe and promote the growth and learning of a community of practice around common educational objectives. The purpose of this project is to use Web 2.0 and modern mash-up technologies to retrieve content and to build customized learning material for each user around a particular topic of interest, in order to take another small step towards Web 3.0. The system must be able to determine the level of knowledge of an individual, capturing information through user interaction with the system itself and profiling the user. The system has to adapt the contents displayed to the user during his training depending on his level.
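The profile-driven adaptation just described can be sketched as follows. This is a hypothetical illustration of the idea, not the ETC_DOC design: the level names, the scoring rule and the thresholds are all assumptions of ours.

```python
class LearnerProfile:
    """Rolling proficiency estimate built from observed interactions."""
    def __init__(self):
        self.score = 0.0

    def observe(self, success: bool):
        """Update the profile from one interaction with the system."""
        self.score += 1.0 if success else -0.5

    @property
    def level(self) -> str:
        # Illustrative thresholds only.
        if self.score < 2:
            return "beginner"
        return "intermediate" if self.score < 5 else "advanced"

def select_material(lesson, profile):
    """Serve the variant of the lesson matching the learner's level."""
    return lesson[profile.level]

lesson = {"beginner": "full tutorial", "intermediate": "summary",
          "advanced": "reference card"}
p = LearnerProfile()
for outcome in [True, True, True]:
    p.observe(outcome)
print(p.level, "->", select_material(lesson, p))  # intermediate -> summary
```

The point of the sketch is the feedback loop: interactions update the profile, and the profile in turn selects the depth of the material served.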
We are talking about a fully adaptive and intelligent e-learning system, whose aim is to produce training materials customized for each user, able to minimize the learning time and the energy spent for this purpose, and which in the meantime gives teachers all the tools necessary to manage courses, lessons, learning materials and virtual classrooms or working groups. This paper aims to discuss these issues and begin to outline a system, named ETC_DOC, for producing educational resources suitable for these purposes. In particular, Section 2 will deal with the system specification. Section 3 will show the architecture of the ETC_DOC system. Section 4 will deal with a view of the technology choices available today. Section 5 will show a learning resource available to the ETC community and discuss the first results, conclusions and future developments.

2 System specification: introduction

The system must be able to provide a Professor with the ability to organize his online courses, managing his own personal knowledge base, made up of documents and learning materials, and to share them with the other Professors in order to build a shared knowledge base in which everyone can find selected quality material to build his lessons. The Professor has the burden of building every lesson of his courses, using the knowledge base and having a prepared learner as target audience; the system must help the professor enrich his lessons with additional contents to reach less prepared people.

Another important feature for the Professor is the possibility of creating workgroups and of enhancing collaborative team working on concrete projects, giving roles and responsibilities to the participants. The system must instead allow a Learner to participate in a course, starting from a single lesson. The difficulty and the depth of the contents of a lesson depend on the preparation level of the student, stored in his profile. The system must acquire and abstract information about the learner based on his behaviour in the learning process and save it in the user profile, in order to obtain an adaptive e-learning system that can provide the user with the best learning experience he can have. The learner can also belong to particular teams or working groups, and through the system he must be able to work with the others and to complete his tasks. The last actor is the System Administrator, who can manage the user list, giving users all the authorizations they need based on their role and their involvement, and who can define the list of criteria used to evaluate the preparation level of the Learners. The system we are going to build must then be incorporated into the one used by the ETC project and be easily integrated through Eclipse technology.

System specification: Use Cases

From the previous description, many use cases can be derived for the different actors. In particular, we can define:

Log-in (all actors). The user tries to access the system, which requires an authentication in order to give the user the right tools depending on his role.

Course Management (Professor). The teacher accesses a course management environment, in which he can build his e-learning courses, organizing every single lesson they contain.
This involves the definition of the topics and of the minimum set of documents that compose each lesson, choosing them from his own set of educational materials, searching the knowledge base shared among teachers, or following the automatic suggestions given by the system. The teacher should be able to define the criteria of proficiency/deficiency for each lesson, exploiting the automatic enrichment of documents done by the system in order to define learning material for each level of proficiency/deficiency.

Learning Material Management (Professor). The teacher interacts with an environment in which he can build his own set of documents constituting the teaching materials, with the possibility of sharing them totally or partially with other teachers in order to build a shared knowledge base. The teacher should then be able to detect, using selection mechanisms, the topics within the documents that can be expanded to a deeper level of detail. These points are used as anchors to build richer documents through the automatic enrichment facility of the system, in order to build lessons for a lower level of proficiency. The professor can guide this process and save the documents.

Teams and Virtual Classrooms Management (Professor).

The Professor creates projects and defines working groups of learners: a collaborative environment in which to share knowledge. These activities can be performed in conjunction with the Rational Team Concert (RTC) platform, as we show later.

Fig. 1. Professor's use case diagram

Courses, Projects and Lesson Management (Learner). The user can search through the courses offered by the Professors and join one of them. Depending on the particular course, the Learner can access particular learning material, whose difficulty is determined by the proficiency level of the student for the particular subject, as stored in his profile. The system must apply all the automatisms relative to the chosen proficiency criteria to understand and modify the proficiency level of the user. These activities can also be performed in conjunction with the Rational Team Concert platform, as we show later.

Team Working and Virtual Classroom Management (Learner). The user can manage his participation in virtual classrooms and projects. He can access a tool for knowledge sharing and team working. These activities are performed in conjunction with the Rational Team Concert platform.

Fig. 2. Learner's use case diagram

User Management (Administrator).

The administrator has access to a panel for user management and the definition of roles, which allows him to define the Professors and the Learners.

Proficiency/Deficiency Criteria Management (Administrator). The actor defines the list of the available criteria and can handle references to the implementation of new criteria, in order to expand this list.

Fig. 3. Administrator's use case diagram

3 A high level description of the system

From a preliminary analysis of the system, we can imagine its architecture composed of the following components/sub-systems.

Web portal: represents the backbone through which all actors access the system in order to perform the actions they are allowed to do. An access management control system identifies the user type and role and adapts the environment accordingly.

Information Sources and Document Management System: this system must allow the Professors to archive and index educational materials, taking them from direct uploads or from trusted web sources. For each document, the teachers can mark pieces about a particular topic or related to a particular area, defining metadata.

System for the management/delivery of courses: the course management system should enable the teacher to create and manage courses, defining all the lessons, using the documents contained in the shared Knowledge Base in order to build the minimum set of contents for a highly proficient student. In the construction process of the lesson, the teacher should set enrichment points for greater deficiency levels, which the system must help to fill with more material. Each enrichment point is therefore represented by a content selected by the teacher and its corresponding level of deficiency.
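The enrichment-point mechanism just described can be sketched in code. This is a hypothetical illustration with names of our own: a lesson starts as the minimal content for a highly proficient learner, and each enrichment point carries the deficiency level at which its extra material is shown. The data layout and the level scale (0 = fully proficient, higher = more deficient) are assumptions.

```python
class Lesson:
    def __init__(self, base):
        self.base = base       # minimum set of contents
        self.enrichments = []  # (deficiency_level, extra_content) pairs

    def add_enrichment(self, level, content):
        """An enrichment point set by the teacher for a deficiency level."""
        self.enrichments.append((level, content))

    def render(self, learner_deficiency):
        """Base content plus every enrichment the learner still needs."""
        parts = [self.base]
        parts += [c for lvl, c in self.enrichments
                  if lvl <= learner_deficiency]
        return parts

lesson = Lesson("UML component diagrams")
lesson.add_enrichment(1, "recap: what a component is")
lesson.add_enrichment(2, "recap: basics of UML notation")
print(lesson.render(0))  # minimal lesson for a proficient learner
print(lesson.render(2))  # fully enriched lesson
```

A proficient learner (deficiency 0) gets only the base content; less prepared learners progressively receive the enrichments the teacher anchored at their level.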
The system must then allow the teacher to select the proficiency/deficiency criteria to be associated with each lesson, according to which the system itself should provide personalized lessons. The same system should enable a student to enroll in a course and to follow the lessons according to his level of proficiency/deficiency.

Virtual Classroom and Team Working System: an environment in which the Learners can work on concrete projects, organized by the Professors. The system must provide a collaborative environment in which roles and tasks can be assigned to single persons or to workgroups.

Some of these components are in part already contained in the ETC architecture; ETC_DOC is superimposed on this architecture to complete the aspects of documentation construction and community-of-practice management, not in terms of the project itself (with which RTC deals), but of training, development of learning resources and exchange of knowledge.

4 A look ahead to the possible technological solution

The current panorama of possible technological solutions offers a wide range of opportunities; the large size of the system and the diversity of its sub-systems suggest the possibility of using different solutions to realize the different components. The most valid seem to be the following:

- A Web Portal to realize the main interface of the system for the end user, linked to an access management system such as an LDAP server.
- Mashup technology to realize advanced Web 2.0 features and to manage the data and metadata of the learning materials.
- The Eclipse and Jazz platforms for the collaboration and team-working features required by the Virtual Classroom and Team Management subsystem.

A deeper explanation of some of these technologies follows, in order to show their full potential for the project.

4.1 Mashup

The word mashup recurs in several different contexts, assuming each time a different meaning; for example:

- In music, a mashup is a song or composition created by blending two or more songs, usually by overlaying the vocal track of one song seamlessly over the music track of another.
- In cinematography, a video mashup is the combination of multiple sources of video, which usually have no relevance to each other, into a derivative work often lampooning its component sources, or another text.
- In digital production, a digital mashup is a digital media file containing any or all of text, graphics, audio, video and animation drawn from pre-existing sources, to create a new derivative work.
From these definitions, despite the clear difference among the objects they define, it is evident that the basic concept is the same: to mix and mash some objects to obtain a new, derivative and complex one. Mashup arises exactly from this idea. One of the first examples in this direction was the work done by John Snow during the cholera outbreak in London in 1854. He placed on a city map all the deaths and all the water pumps and, analyzing the information built with this mashup, he understood that water was the main vehicle of cholera. Within computer science, the word mashup assumes yet another meaning: a mashup is a lightweight web application which allows users to remix information and functions belonging to different sources and to work with them to

build software in a completely new, simple and quick way. Users can effectively shape their business processes, reaching results that are impossible to achieve with older technologies. Mashups stand on the fundamental concept of data and service integration; to operate in this way there are three main primitives: Combination, Aggregation and Visualization. The first allows collecting data from heterogeneous sources and using them within the same application; the second allows operating on the collected data, measuring them and building new information from them; the last is used to integrate data in a visual way, using maps or other multimedia objects. In a technological view of the mashup and of its data and service integration problem, a natural representation of the problem itself can be obtained using a layered/pyramidal approach (see Fig. 4).

Fig. 4. The Mashup Pyramid

In the lowest abstraction layer there are Data Feeds and the web technologies they involve. They represent a good solution for accessing updated data in a quick and secure way. The next level up contains the APIs, used to obtain data dynamically and services on demand. A greater level of abstraction is achieved by Code Libraries, which can be thought of as application frameworks and API packages built to solve certain kinds of problems. Above the Code Library level stands the GUI Tools level, made of widgets and technologies related to the composition of small graphical applications to show data or to allow access to a service. At the top of the pyramid there is the Platform level, composed of all the tools and platforms that support the building of mashup applications, allowing the composition of single graphical elements and lower-level data.

4.2 Eclipse and Jazz

Jazz is an open platform whose aim is to provide a frictionless work environment that helps teams collaborate, innovate, and create great software. To that end, Jazz focuses on driving fundamental improvements in team collaboration, automation, and reporting across the software lifecycle. Its convergence with the Eclipse platform makes it a really useful solution. Jazz is an IBM initiative to help make software delivery teams more effective; it transforms software delivery, making it more collaborative, productive and transparent. As we can read on the website, the Jazz initiative is composed of three elements:

1. An architecture for lifecycle integration. Jazz products embody an innovative approach to integration based on open, flexible services and Internet architecture. Unlike the monolithic, closed products of the past, Jazz is an open platform designed to support any industry participant who wants to improve the software lifecycle and break down the walls between tools.

2. A portfolio of products designed to put the team first. The Jazz portfolio consists of a common platform and a set of tools that enable all the members of the extended development team to collaborate more easily. Rational Team Concert is a collaborative work environment for developers, architects and project managers, with work item, source control, build management, and iteration planning support. It supports any process and includes agile planning templates for Scrum and the Eclipse Way. Rational Quality Manager is a web-based test management environment for decision makers and quality professionals. It provides a customizable solution for test planning, workflow control, tracking and reporting, capable of quantifying the impact of project decisions on business objectives. Rational Requirements Composer is a requirements definition solution that includes visual, easy-to-use elicitation and definition capabilities.
Requirements Composer enables the capture and refinement of business needs into unambiguous requirements that drive improved quality, speed and alignment.

3. A community of stakeholders. Jazz is not only the traditional software development community of practitioners helping practitioners. It is also customers and community influencing the direction of products through direct, early and continuous conversation.

Moreover, Jazz also defines three main objectives:

1. Collaboration. Jazz tools reflect the insight that the center of software development is neither the individual nor the process, but the collaboration within the team. It also recognizes that the team extends beyond the core practitioners to include everybody with a stake in the success of an initiative. A goal of the Jazz initiative is to enable transparency of teams and projects for continuous, context-sensitive collaboration that can: promote break-through innovation; build team cohesion; leverage talent across and beyond the enterprise.

2. Automation.

A goal of the Jazz initiative is to automate processes, workflows and tasks so that organizations can adopt more lean development principles at the pace that makes sense for them. The Jazz initiative endeavors to: improve the support and enforcement of any process, including agile processes; reduce tedious and time-consuming manual tasks; capture information on progress, events, decisions and approvals without additional data entry.

3. Reporting. The Jazz initiative is focused on delivering real-time insight into programs, projects and resource utilization to help teams: identify and resolve problems earlier in the software lifecycle; get fact-based metrics -- not estimates -- to improve decision making; leverage metrics for continuous individual and team capability improvement.

5 First results, conclusions and future development for the ETC_DOC project

Building useful resources for a community of practice can be a very complex activity. For the community of the ETC project, whose aim is to cooperate in developing software projects on common themes, one of the problems is precisely keeping the available documentation up to date. The community consists of about 300 students, 7 teachers and 7 student champions (Student Coordinators). For this reason it is necessary to coordinate the resources made available to the community, so that everyone can see the latest updated resources without having to search the web for hours. The figure below shows the plug-in that has been built and that addresses this issue of coordinating and disseminating resources to all the students. Resources can be of different types, as seen in the figure, and the user does not have to handle any versioning mechanism, which is run server side. What the figure shows is an Eclipse plug-in that integrates seamlessly into RTC (Jazz), so that the student sees it in his RTC client.
Therefore he does not have to leave his development environment to access the documentation and learning resources that enable him to work with proficiency. This is an example of acquiring proficiency in advance. Currently, working groups are managed using RTC; a snapshot of this activity follows. One simple project has been realized (Prelude), and a cross-university project has been conceived (OTRE), involving UniNa, UniGe and the Eclipse community. As future developments we can state the following: the experimentation will continue until March 2011, with a first report of results on September 30th, 2010, during Eclipse-IT 2010. ETC and ETC_DOC are supported by the IBM Academic Initiative and the IBM Rational brand. Among the projects in which they are involved, it is worth mentioning a cross-university project called On the Road to Eclipse-IT 2010 (OTRE). We think that this paradigm is a practical way to design (and improve) the teaching of computing: a way that uses the experience of communities of practice in order to internationalize the interaction of students too, and a practical way to implement projects with first-rate industrial partners. In other words, a fun way to learn.

Fig. 5. Eclipse Team Cooperation (ETC)

Enforcing Team Cooperation using Rational software tools: merging universities and IBM effort together

Giorgio Galli 1, Ferdinando Gorga 1, Paolo Maresca 2, Carla Milani 1
1 IBM Italia S.p.A. Circonvallazione Idroscalo, Segrate (MI), {Giorgio_Galli, Ferdinando_Gorga,
2 DIS, Università di Napoli Federico II, Via Claudio 21, Italy,

Abstract. The availability of architectures borrowed from the Internet experience for the software development world, conveyed in particular by the Jazz architecture, gave rise to the idea of an experimental project applying the disciplines of software engineering in a collaborative university setting, one that allows students and teachers to make the best use of university teaching methods combined with the new possibilities of inter-university collaboration and sharing. This paper reports the ideas, the vision, the results obtained and the future developments of the project named Enforcing Team Cooperation using Rational software tools (ETC for short), and describes the collaboration between the Italian university world and IBM within the activities of the Academic Initiative.

Keywords: E-learning, Eclipse and Jazz Technologies, C/ALM.

1. Introduction, Context and Motivations

The scenario in which this work takes shape has at least three perspectives, which we examine in detail below: the university domain, the job market domain, and the corporate domain.

1.1 Universities

The Italian university education landscape, although very varied and distributed across the country, presents numerous challenges and room for intervention, especially in support of advanced teaching techniques and tools. These needs are very often met by the teachers, driven by competence, by a passion for teaching and by a healthy patriotic spirit, but

sadly squeezed between budget cuts and the demands of rewarding teaching. Students, in turn, need the best educational context to meet their expectations for their preparation and their future, sometimes seeking the most effective university experience with desirable openings into the job market. Although this picture can be considered, all in all, sufficiently normal for humanities and history faculties, it still appears insufficient for scientific faculties when compared internationally. These faculties are obviously the most dependent on technology obsolescence and on the availability of machines and tools for the future work of future professionals, and they are the best candidates for collaboration with industry. The university context is also affected by the very recent reorganization of university teaching, which pushes toward updating courses and the resulting materials (in the space of 5 years no fewer than three reforms of university studies have followed one another) and the related laboratories, so as to be aligned with a common and uniform body of knowledge: a lingua franca for the portability of skills across Europe. In this context the Italian educational offering, although complete, is slow to adapt autonomously both to the reorganizations and, above all, to the need for international comparison and alignment, since it is always difficult to channel the potential of the teaching staff and the energies of the student body toward achieving competitive teaching.
1.2 The job market

The current international job market regards as necessary such basic skills as mastery of foreign languages, mastery of state-of-the-art methods and techniques, and the innovation capacity peculiar to the well-prepared young mind, of which companies have great expectations; this further underlines the need to manage scientific teaching in the best possible way. Very often the growing companies that are candidate employers are multinationals, and this implies additional abilities of expression (knowledge of languages) and of work (collaboration in distributed and virtual teams), together with a general willingness to move abroad or to interact for longer or shorter periods with foreign colleagues. The ability to actually work in a team is rarely exercised in university classrooms and laboratories, both because courses are compressed and because of the learning methods used in education. It is worth underlining that learning activities, in the majority of courses delivered face to face, are competitive rather than cooperative/collaborative. This is all the more true for cooperation at the inter-university level: a student has no opportunity to work with students of other universities during a course, even an applied one. Fortunately, these last observations hold in general for all universities and are not only an Italian shortcoming; they are simply the result of the same

inevitable university structure. Managing somehow to overcome these aspects, however, would bring Italian teaching to levels of international excellence, connecting it to the operational learning methods used by the researchers of companies accustomed to competing daily at those levels of excellence.

1.3 Companies - IBM

The working world pays close attention to what happens in the university world, for several reasons. The new professionals will be the people who in the future will decide investments in companies and will allow those same companies to survive and grow (they will be the future employees and managers, as well as future customers or investors). This means that helping to train better professionals can only benefit the Italian industrial ecosystem. IBM has long been devoting energy to supporting the academic world as well as possible in every country in which it is present. In Italy it has set up a team of top professionals who, in parallel with their usual work, devote themselves to supporting universities, offering lectures, seminars, vision, teaching material and software systems. These initiatives fall under the name of IBM Academic Initiative.

2. ETC: Enforcing team cooperation project using Rational software tools: merging universities and IBM effort together

2.1 The idea, the birth, the vision

The ETC project was born from a chat between prof. Paolo Maresca and one of the authors, Ferdinando Gorga, during which the use of collaboration tools within the normal course of study of software engineering students of the computer engineering degree program was envisaged. From an embryonic idea, in a few days the project took shape in all its importance, thanks also to the contribution of the other originators. The needs that led to its conception can be summarized with the following list of characteristics:

1.
The need to support the teaching of software engineering disciplines (but also of other development disciplines) with tools of adequate quality, professional and of industrial use. 2. The possibility for students to use state-of-the-art design, development, testing and quality control technologies.

3. Experimentation with professional industrial tools in a university case study. 4. Reuse of the communication paradigms used on the Internet (chat, collaboration, web 2.0, social web, mashups). 5. A concrete opportunity to exercise roles, like hats of different colors, wearing them during the execution of the small course projects. The experience of really being in the roles of industrial projects, as analysts, designers, developers, testers, etc., is indispensable in software development, and the average student hardly exercises all of them during his course of study. 6. Since the courses, for obvious reasons, span several years in order to provide all the skills related to the roles mentioned, the need arose for these tools to enable students to interact with one another, even if not belonging to the same course, each perhaps in the role whose discipline he was studying at that time. This method is thought to increase knowledge transfer, because it encourages collaborative interaction among students and allows the creation of a realistic team, potentially complete with all the necessary roles, personified by the students of the various courses. 7. Extending the previous concept to a possible interaction covering the entire Italian scientific student population across the various universities was a very short step, and led to the wish to enable more than one university for this kind of experience. 8. While these ideas were being consolidated, it became evident that the University of Napoli Federico II (project coordinator) could not obtain in a short time a server that could support this project, and IBM management offered to solve the problem, instrumenting the project with an IBM server of adequate power. 9. In parallel with the definition of these needs, the scientific community of teachers was being involved by prof.
Maresca, and support for the initiative, consensus and new ideas arose. 10. After the start of the experimentation, the concept of call for project was introduced: universities can publish the roles that need to be covered in the educational projects, and a student of a participating university can propose himself as an employee in the role he needs. In this way the student's responsibility toward his own education is amplified, projects are carried out that are not crippled by missing skills, and in general all students are allowed to live, years in advance, the experience of working in industrial development teams, which is the central objective of the whole initiative. 11. Subsequently the first projects were defined, roles and students were assigned to them, and the first completely new teaching materials were built, providing a view of the complexity of the platform in a light way. This activity generated a special project in support of the ETC project, whose task is to study the innovations to be introduced into ETC for its operation and for its use on a national scale by an ever larger number of teachers, students and university researchers,

above all with a view to spreading and extending the ideas conveyed by the project in the years to come. 12. Alongside what had been conceived, there was also the desire to obtain an ecosystem that could serve as an incubator for future extensions, not limiting but amplifying innovative ideas for teaching and for the university context. 13. On IBM's side there was the will to create an innovative and unique experience that could enhance both the university work and the features of the new Jazz platform, born precisely to enable heterogeneous and distributed teams to collaborate fruitfully. The list of these main needs can in fact be considered the essence of the vision that gave birth to the project and that is making it grow.

2.2 Participating universities

At the moment seven Italian universities, from both the South and the North, have joined the project; Table 1 shows the number of student accounts (317), to which must be added: (i) the teachers' accounts, on average 2 per site (in Napoli there are 8, four for each course and the related assistant), (ii) the accounts of the champion students (1 per site, 7 in total), (iii) the accounts of the ETC support team made up of thesis students (15), (iv) the accounts of the IBM consultants supporting the initiatives (3), (v) some test and service accounts for administration.

Table 1. ETC. Italian universities involved and number of users (students only).

University / Course / Number of users
Federico II - Napoli / Software engineering / 176
Milano Bicocca / Software engineering
Bologna / Software engineering
Bergamo / Informatica III
Genova-Savona / Web design laboratory / 12
Bari-Taranto / Web design / 2
Salerno / Web design / 5

The total amounts to about 350 users. The scenario of the universities that joined is varied with respect to the context in which the experimentation is meant to be conducted.
For example, three universities focus on the topic of software engineering: the engineering faculties of the University of Napoli Federico II, of Milano Bicocca and of Bologna. They also act as the spearhead for exploring processes, documentation and tools for the harmonization of the ETC ecosystem, which is formed by other universities whose focus is directed toward WEB development courses (Genova, Savona, Salerno) or which have a marked specialization in a particular phase of software engineering (Bergamo), namely testing. One might think at first sight that these worlds are distinct and distant from each other; instead we have observed that, in these few

months, the interactions among the universities have been intense. For example, the collaboration between engineering students (UniNa) and education science students (UniGe) has been very fruitful. For the first time engineering doctoral students have trained education science students on such complex platforms, also receiving interesting feedback on some upgrades from the communication side, at which communication science students are better than engineering ones.

2.3 Participating students and users

The project has aroused considerable interest among teachers and university students. In only three months their number has grown, not only at the sites but also in terms of collaboration activities among them, beginning to sketch a concrete knowledge network. Currently the ETC project and its hardware platform are hosting the activities of more than 350 students, researchers, teachers and champions registered as users. Significant support for the experience is provided by the student champions: doctoral students of the universities who are trained to act as coordinators and mentors for the younger students. There is one for each site, and he interacts closely with his own teacher, with the students and with the IBM consultants. A knowledge sharing phenomenon is also beginning to appear on the project scene, in which experienced students begin to offer their collaboration as actors in the software process. In other words, the best designers or specifiers make themselves available for training and helping other students who perform the same function in other projects.
This helps to disseminate more quickly the specificities of the platform, which initially is quite hard to master, since it synthesizes methodologies and techniques deriving from numerous exams within the university disciplinary grouping of Information processing systems (software engineering, databases, object-oriented programming, operating systems, computer networks, etc.).

3. Description of ETC

The ETC project, Enforcing Team Cooperation with Rational tools, is an inter-university project that enables universities to offer students a distributed communication and collaboration laboratory in which to complete educational projects in an industrial context. It is founded on a sharing of intents and efforts carried out jointly by the university community involved and IBM. It is based on the use of the tools IBM Rational Team Concert, Rational Quality Manager and Rational Requirements Composer, all based on the Jazz architecture. Below we show a figure of the hardware organization of the system (which, at the time of the figure, does not include all the participating universities).

Figure 1. Hardware architecture of the ETC platform

The project mainly instruments the following disciplines and activities: software configuration, version management, build management, general source management, task management, task assignment, production of metrics and project progress reports, introduction to Agile terminology, requirements conception and outlining, sketching of user interfaces, design of business processes, creation of WBS, creation of test plans, creation of test cases, execution of test cases, opening and management of defects, assignment of defects,

collaboration among students through task assignment and integrated chat, skill management, and visualization of project status through mashups and dashboards. These activities are mainly aimed at the roles of: designers, system and business analysts, developers, software architects, testers, project managers, stakeholders and configuration managers. Students can play the roles described, using the tools listed and the related disciplines, to write the software requested of them and to create the requirements from which development starts: programming, drawing up test plans, experimenting with an iterative and collaborative approach to writing software, cooperating with one another and, above all, concentrating on the subject of study in a context of collaboration and responsibility. The industrial name of this approach to development is Collaborative Application Lifecycle Management (C/ALM). The Jazz architecture on which the tools are based allows software development teams to benefit from the same positive characteristics brought about by the global use of the Internet. This yields a possibility of collaboration and communication among students that is not external to the university context but is built, offered and encouraged by the universities themselves. The underlying idea is that students can maximize, through mutual collaboration and support, the educational message offered by the universities, in a highly modern setting.

3.1 IBM tools and consulting

Software tools. The current IBM commercial products for application lifecycle management, based on the Jazz collaborative platform, were employed: Rational Team Concert standard edition, Rational Quality Manager, Rational Requirement Composer,

concurrent licenses for more than 300 simultaneous users, and DB2 as the enterprise database.

Additional IBM tools. Rational RequisitePro, Rational Method Composer, Rational Software Architect, Rational Functional Tester.

Supporting hardware. The chosen server is a four-processor IBM series X blade machine.

Support team. The following people - IBM professionals and managers - conceived the architecture, procured the resources, harmonized the investment, carried out the installations and testing, the initial start-up and the first classroom lectures: Massimo Caprinali, Paolo Cravino, Giorgio Galli, Ferdinando Gorga, Carla Milani, Emilio Salierno.

4. ETC project roadmap

Below, in Table 2, we list the significant milestones of the project's birth and the short-term roadmap:

Table 2. ETC. Project roadmap for the first year.

September 2009: birth of the idea
October 2009: consolidation of the idea and involvement of IBM management
December 2009: definition of the project scope and of the architecture
January 2010: procurement of the server
February 2010: software installation and platform testing
March 2010: server installation and entry into service of the system; creation of user accounts
April 2010: first experiments and birth of the projects; construction of teaching material; setup of the server backup and construction of a student-side virtual machine; identification and involvement of the student champions;

April 2010: seminars by the student champions at other sites (Genova, Savona, Salerno, Napoli); participation in conference calls on the ETC theme by the actors of the experimentation; registration of the participants; enrollment in the Academic Initiative; delivery by IBM of two seminars describing software engineering scenarios in industry, held at the University of Napoli for about 150 students
May-June 2010: self-fertilization of the initiative; joining of the universities; dissemination of skills by the champions and the teachers; consolidation of usage; conception of best practices; construction of Eclipse plug-ins to support the ETC project; first impressions and suggestions from students
July-September 2010: conception and definition of process metrics and usage feedback; presentation at the Eclipse Conference IT
December 2010: analysis of feedback and lessons learned; preparation for rollout to other projects/universities; setup of a demo project demonstrating the initiative

5. The role of the IBM Academic Initiative

Since its birth, the Academic Initiative has set itself the objective of providing tools and knowledge to the international university teaching community. While until now it was the teachers who obtained the necessary technology from IBM and used it more or less autonomously in their teaching, with the experience maturing as the ETC project proceeds, the Academic Initiative is taking on a more central and coordinating role, as well as acting as a driving force, in the context of the cooperating universities.
Given the volume of artifacts produced and in production, and the volume of man-hours being accumulated on the systems under experimentation, the ETC project also represents for IBM a new source of study and information on how the products, their positioning and their extensibility might be improved, and it is increasing the amount of best practices concerning those same tools. The meeting of the two worlds, university and industry, is thus producing the results of mutual usefulness that had been hoped for and sought during the first meetings. These developments make it clearer how this project is going beyond the boundaries within which the authors had sanctioned its birth, and is becoming an experimental laboratory where not only new ideas for improving the tools are born, but also new patterns of behavior, of study and of teaching (for example the aforementioned inter-university call for project in the educational projects).

The synergy between the positive energies conveyed by the enthusiasm born around the founding ideas and those invested by IBM in terms of material, availability, support and guidelines has produced a growth medium (and culture!) whose fruits will be harvested for years. With the joining of new universities and with the constant enrichment of the tools by the IBM laboratories, new ideas can be welcomed and the experimentation can also extend in new and unexpected directions.

Figure 2. IBM Academic Initiative

6. Conclusions, first results and future developments

The ETC project is a great novelty in the Italian university teaching landscape and beyond. The number of educational projects active at the moment is 11, of which 6 are inter-university, 4 local and one in support of the ETC initiative itself. These projects employ students of different universities and from different degree programs, implementing a kind of knowledge network on common themes and on a single platform. This constitutes a technological lingua franca that yields a high level of interaction for the students and is an interesting e-learning experiment not only for the students but also for the teachers. Indeed, it has not yet been underlined that for the first time teachers of the same subjects find themselves cooperating in a common field in defining, perhaps for the first time in this way, teaching resources of general interest.

With ETC, IBM has intended to offer, sparing no means, the best support toward an education oriented to industrial methods, effective, lean, easy and modern, to the universities and to the scientific student population. The implications of ETC and of the joint use of Jazz in the universities have so far been only partially explored, the project being an incubator of new ideas and a collaboration platform dedicated to the best Italian minds. The intention is to study many phenomena, both internal to the technology and alongside it. In other words, we do not want to limit ourselves to providing feedback on usage and on adding functionality to the platform, something IBM does anyway with its own consultants and laboratories. We want instead to study the impact of professional software on university students of different levels, and further to work on knowledge networks applied to communities of practice, also trying to understand what teaching material we must learn to build for courses whose students find themselves immersed in such a dynamically configurable knowledge network. Further, we want to understand how people cooperate, developing new ideas in an open innovation network logic, not excluding the teachers themselves, who are actors, no longer central ones, of this world. The result of this volume of usage is to be judged very positively, considering that the server was installed only in March. It is our hope that this experience will contribute to forming a future class of professionals able to compete on equal, if not superior, terms in the difficult struggle for success and prosperity in the national and international job market.

Student Papers

Chair: Gianni Vercelli
Dipartimento di Informatica, Sistemistica e Telematica, Università di Genova

An Eclipse Plug-in for Code Search using Full-text Information Retrieval Engine

Andrejs Jermakovics, Francesco Di Cerbo
Free University of Bolzano/Bozen, Bolzano/Bozen, Italy
{Andrejs.Jermakovics,

Abstract. Search is becoming an important aspect of software development, motivated by growing code sizes and open-source availability. We present InstaSearch, an Eclipse plug-in for flexible and high-performing code search using the Lucene information retrieval library. We describe the provided search capabilities and the architecture behind the plug-in.

1 Introduction

The architecture of the Eclipse IDE comprises a number of external tools that are used to provide seamlessly integrated functionality to the user. Among them is Lucene [1], a high-performance information retrieval software library. It is suitable for applications that need full-text indexing and searching capability. Lucene has been used in the implementation of Internet search engines [3] and in local search. At the core of Lucene's logical architecture is the idea of a document containing fields of text. This flexibility allows Lucene's API to be independent of the file format: text from PDF, HTML, Microsoft Word and OpenDocument documents, as well as many others, can all be indexed as long as their textual information can be extracted. Lucene is also part of the Eclipse architecture. Currently (Helios release), the Eclipse Help component exploits the capabilities of the Lucene search engine, allowing the indexing of token streams (streams of words). Analyzers create tokens from the character stream: they examine text content and provide tokens for use with the index. The text stream can be tokenized in many different ways; a trivial analyzer can tokenize the stream at white space, while a different one can filter tokens based on the application's needs.
Since the documentation is mostly human-readable text, it is desirable that the analyzers used by the help system perform language- and grammar-aware tokenization and normalization of the indexed text. For some languages, the quality of search increases significantly if stop-word removal and stemming are performed on the indexed text.

The idea behind InstaSearch [2] is to reuse the Lucene components already embedded in Eclipse in order to make Eclipse searches more efficient and effective. A particular infrastructure is needed to allow Lucene to index files contained in the

active workspace. InstaSearch provides this infrastructure, together with an effective Eclipse view to control the plug-in features. In this paper, the main features and the architecture of InstaSearch are illustrated.

2 Search Features

The goal of InstaSearch is to provide powerful and flexible code search with high performance. It offers flexibility through the Lucene query syntax and the fields that it defines for files. Example search queries include:

- Wildcard searches, to search using a substring: app* initialize
- Excluding words: application -initialize
- Fuzzy searches, to find similar matches: application init~
- Limiting by extension, project or working set: proj:myproject ext:java,xml application init, or ws:myworkingset application init
- Advanced queries: index AND (directory OR dir)

It should be noted that the working set (ws) is a virtual field: it is not stored in the index, and is converted by the plug-in to a list of projects before the search.

Fig. 1. Search using InstaSearch
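The rewriting of the virtual ws field can be sketched in plain Java. The class and method names below are illustrative, not InstaSearch's actual API: the idea is simply that, before the query reaches Lucene, each ws:<name> term is replaced by a clause over the proj field.

```java
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

/** Illustrative sketch: rewriting the virtual ws: field into proj: clauses. */
public class VirtualFieldRewriter {

    private static final Pattern WS_TERM = Pattern.compile("ws:(\\S+)");

    /** Replaces each ws:<name> term with a (proj:a OR proj:b ...) clause. */
    public static String expand(String query, Map<String, List<String>> workingSets) {
        Matcher m = WS_TERM.matcher(query);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            List<String> projects = workingSets.getOrDefault(m.group(1), List.of());
            String clause = projects.stream()
                    .map(p -> "proj:" + p)
                    .collect(Collectors.joining(" OR ", "(", ")"));
            m.appendReplacement(out, Matcher.quoteReplacement(clause));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```

For example, expanding ws:myworkingset application init against a working set containing the projects core and ui yields (proj:core OR proj:ui) application init, which Lucene can evaluate directly against indexed fields.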

3 Architecture

InstaSearch uses Lucene as a library and defines multiple sub-components for indexing and searching. The overall architecture is shown in Fig. 2.

Analyzer. The Analyzer reads files from the workspace and splits their text into a set of tokens. It splits the text into words, removes stop words, and then splits words at camel-case boundaries and underscore characters, because these are used in code identifiers. Both the original word and its split parts are indexed, which makes it possible to search for parts of identifiers (e.g. finding the word "List" in "ArrayList") as well as for the exact identifier.

Fig. 2. Overall architecture

Indexer. The Indexer collects files with their tokens and writes them to the Lucene index. The meta-data associated with each file is specified using several fields, which can later be used in a Lucene search query to narrow down the results (e.g. to files in a certain project).

Table 1. InstaSearch defined fields

  file      Full path of the file
  name      Name of the file
  ext       Extension of the file
  proj      Name of the project containing the file
  jar       Name of the jar if the file is stored in a jar
  contents  Contents of the file (default search field)
  ws        Working set containing projects (virtual field)
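The identifier splitting performed by the Analyzer can be illustrated with a short sketch (a simplification, not the actual InstaSearch code): both the exact identifier and its camel-case/underscore parts become searchable tokens.

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch of identifier tokenization: the original token is kept,
 *  and its camel-case / underscore parts are emitted as additional tokens. */
public class IdentifierSplitter {

    public static List<String> tokens(String identifier) {
        List<String> result = new ArrayList<>();
        result.add(identifier);  // the exact identifier stays searchable
        // Split on underscores and on lower-to-upper camel-case boundaries.
        String[] parts = identifier.split("_|(?<=[a-z0-9])(?=[A-Z])");
        if (parts.length > 1) {
            for (String p : parts) {
                if (!p.isEmpty()) result.add(p);
            }
        }
        return result;
    }
}
```

With this scheme, tokens("ArrayList") produces ArrayList, Array and List, so a query for "List" matches the identifier even though the source never contains the word on its own.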

Special care is taken with JAR files that have source code attachments. The Eclipse Java Development Tools (JDT) API allows retrieving the source code, which is therefore indexed; the associated file, however, is the compiled .class file (IClassFile interface). As a result, when the .class file is opened in an editor, JDT recognizes the Java type(s) and allows further Java-related interaction (e.g. viewing the type hierarchy).

The plug-in adds a resource change listener to the Eclipse workspace to collect files that are changed by the user. Periodically these changed files are re-indexed to keep the index up to date; this operation, however, is only performed when search is not in use. The Lucene merge factor is set low to reduce the amount of resources used, and the task is run as a background system task in Eclipse.

Query Analyzer. The Query Analyzer parses the search text entered by the user and creates a search query, which is used to retrieve the files from the index. Multiple operations are performed on the search query before running it, such as expanding the ws field to a list of projects or replacing "." in proj:. with the current active project.

InstaSearch View. The InstaSearch view performs all UI interactions for getting the search text and displaying the list of matching files. The search is performed while the search text is being typed, which allows the user to tune it quickly for more relevant results. For each displayed file, the view computes the number of matched terms: it retrieves the terms from the search query and then sums the term occurrence counts from the term-frequency vector, which is available from the Lucene index. When the user requests a preview of a search result file, the view finds the most relevant lines. Each line consists of terms, and Lucene assigns weights to the terms that appear in the query; a line is considered more relevant if the total weight of its terms is greater than that of another line.
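The preview ranking described above can be sketched as follows. This is a simplification with hypothetical names; in InstaSearch the term weights come from the Lucene query, while here they are passed in as a plain map. Each line is scored by the total weight of the query terms it contains, and lines are sorted by that score.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

/** Illustrative sketch of preview ranking: a line's relevance is the sum of
 *  the weights of the query terms it contains. */
public class LineRanker {

    /** Sums the weights of the known terms appearing in the line. */
    public static double score(String line, Map<String, Double> termWeights) {
        double total = 0;
        for (String token : line.toLowerCase().split("\\W+")) {
            total += termWeights.getOrDefault(token, 0.0);
        }
        return total;
    }

    /** Returns the lines ordered by descending relevance. */
    public static List<String> topLines(List<String> lines, Map<String, Double> termWeights) {
        return lines.stream()
                .sorted(Comparator.comparingDouble(
                        (String l) -> score(l, termWeights)).reversed())
                .collect(Collectors.toList());
    }
}
```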
4 Conclusions

InstaSearch is a plug-in that provides effective full-index search in Eclipse and currently gives instant search results for workspaces with tens of thousands of files. It relies on Lucene, a powerful engine embedded by default in Eclipse but mainly used there to enrich the Help functionalities. The architecture of InstaSearch creates the set of components and resources needed for indexing workspace files with Lucene, as well as a convenient Eclipse view as user interface. InstaSearch is hosted on Code.Inf, an initiative of the Computer Science faculty at the Free University of Bolzano-Bozen aiming to foster Free/Open Source initiatives that come from research and teaching activities.

References
1. Apache Lucene.
2. InstaSearch Eclipse plug-in.
3. Perner, P.: Machine Learning and Data Mining in Pattern Recognition: 5th International Conference. Springer (2007)

Collaborative GeoGebra

Emidio Bianco, Ilaria Manno, and Donato Pirozzi
ISISLab, Dipartimento di Informatica ed Applicazioni R.M. Capocelli, Università di Salerno, Fisciano (SA), Italy

Abstract. In this paper we illustrate our work to make GeoGebra, a software tool supporting the learning of mathematics, algebra and geometry, collaborative. The introduction of collaborative features in GeoGebra is founded on CoFFEE, an Eclipse-based environment designed to support collaborative learning.

1 Introduction

The work presented in this paper aims to create a tool supporting the collaborative learning of mathematics, algebra and geometry. To achieve this result, we have used two existing systems: GeoGebra [1] and CoFFEE [2-4]. The idea presented in this paper is to introduce collaborative functionalities in GeoGebra and to integrate this tool in CoFFEE.

GeoGebra is a dynamic mathematics software for schools that joins geometry, algebra, and calculus. It allows constructing points, vectors, segments, lines and conic sections as well as functions, and also allows changing them dynamically. GeoGebra's user interface (see Fig. 1) consists of a Graphic Window and an Algebra Window. On the one hand, the provided geometry tools can be operated with the mouse to create geometric constructions on the drawing pad of the Graphic Window. On the other hand, algebraic input, commands, and functions can be entered directly into the input field by using the keyboard. While the graphical representation of all objects is displayed in the Graphic Window, their algebraic numeric representation is shown in the Algebra Window. These two views are characteristic of GeoGebra: an expression in the Algebra Window corresponds to an object in the Graphic Window and vice versa. GeoGebra is implemented in Java and is distributed both as a standalone application and as a Java applet for use in the browser. Both distributions of GeoGebra provide APIs to interact with the application.
CoFFEE (Cooperative Face-to-Face Educational Environment) is a suite of applications designed to support collaborative learning in the classroom. The main applications are the CoFFEE Controller and the CoFFEE Discusser, which are used by the teacher and the learners during the collaborative session. CoFFEE provides a set of collaborative tools offering a wide range of collaborative functionalities. The Threaded Discussion tool provides the learners with a textual debating space with a tree structure. The Graphical Discussion tool provides a shared graphic space where the learners can add their contributions as movable boxes (similar to post-its) or as links among boxes. The Positionometer is a voting system that allows the teacher to gather the learners' positions on a question. The CoFFEE applications are implemented as Rich Client

Fig. 1. The user interface of GeoGebra

Applications, and the collaborative tools are implemented as Eclipse-based plug-ins. The integration of the tools into the CoFFEE applications happens through an extension point defined on the applications, which is extended by the tools. This mechanism derives from the Eclipse architecture and makes it possible to integrate new tools without modifying the existing applications.

2 Collaborative GeoGebra

The aim of this work is to share the same GeoGebra space among multiple users, allowing synchronous collaboration: they can add, modify, and delete objects at the same time, and each user can see the changes in real time. To make GeoGebra collaborative, we leverage CoFFEE and define Collaborative GeoGebra as a CoFFEE tool. A screenshot of the CoFFEE Controller with Collaborative GeoGebra and the Chat tool provided by CoFFEE is shown in Fig. 2. On the left, the Control Panel view provided by CoFFEE shows the participating students and provides the teacher with control functionalities (such as freezing students, group management, etc.). The central view shows Collaborative GeoGebra: each participating user sees the same content in his own GeoGebra view and can add new objects or interact with existing ones. The detached view is the Chat tool provided by CoFFEE: the integration of GeoGebra in CoFFEE makes it possible to use multiple CoFFEE tools to support different collaborative tasks.

The integration of GeoGebra in CoFFEE involves two phases: in the first phase we integrate GeoGebra as it is into a CoFFEE tool, and in the second phase we introduce collaboration functionalities into GeoGebra. We have therefore designed a CoFFEE tool with server and client sides (implemented as plug-ins), which extend the CoFFEE Controller and Discusser. Each side (client and server) of the CoFFEE tool implements the following classes:

Fig. 2. Collaborative GeoGebra in the CoFFEE Controller

- a class to manage the life cycle of the plug-in; it is used by CoFFEE (and the underlying Rich Client Platform) to start the plug-in, and extends the default implementation provided by CoFFEE;
- a class to manage the communication functionalities; it provides methods to exchange messages among clients and server, and extends the default implementation provided by CoFFEE;
- a class to manage the GUI of the plug-in.

The class that manages the GUI is where we integrate GeoGebra into the CoFFEE tool: we use a Browser widget (org.eclipse.swt.browser.Browser) to execute the GeoGebra applet. This completes the first phase of integrating GeoGebra in CoFFEE; we then have to introduce collaboration functionalities into GeoGebra. GeoGebra provides two kinds of API: methods to get and set the state of GeoGebra objects, and methods to register JavaScript functions as listeners. We have therefore defined our listeners as JavaScript functions and added them to GeoGebra. Our listeners are notified of each event triggered by the user on GeoGebra: creation of objects, moving, deletion, changing of properties (color, shape, etc.). For each event, the appropriate listener creates a message containing all the relevant information and sends it to the server; the server executes the event by using the methods provided by GeoGebra to manipulate its objects, then forwards the message to all the clients; on reception of the message, each client executes the event by using the same GeoGebra methods.

Architecture. The client-server architecture allows us to centralize the synchronization of events on the server, which manages them in order of arrival. This approach has required implementing a specific behavior for creation events.
The problem with this kind of event is that the listener on the client that generated the creation event sends its message to the server only after the object has already been created locally, which would break the guarantee of synchronization on the server. To avoid this problem, the listener deletes the object just created and repeats the creation when the creation event is received back from the server.
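The round trip for creation events can be simulated with a minimal sketch. The class names here are illustrative, and the real implementation goes through the GeoGebra JavaScript API and CoFFEE messaging; the point is only the ordering discipline: the originating client undoes its local creation and re-applies it when the server's echo arrives, so every client, including the originator, sees creations in the server's arrival order.

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative simulation of the creation-event protocol. */
public class CreationSync {

    /** A client's local GeoGebra state, reduced to a list of object names. */
    static class Client {
        final List<String> objects = new ArrayList<>();

        void userCreates(String name, Server server) {
            objects.add(name);      // the applet creates the object immediately...
            objects.remove(name);   // ...so the listener deletes it again
            server.receive(name);   // and only reports the event to the server
        }

        void onServerEcho(String name) {
            objects.add(name);      // re-create when the ordered echo arrives
        }
    }

    /** The server applies events in arrival order and broadcasts to everyone. */
    static class Server {
        final List<Client> clients = new ArrayList<>();

        void receive(String name) {
            for (Client c : clients) c.onServerEcho(name);
        }
    }
}
```

After a user creation, the originating client holds exactly one copy of the object, received via the server echo rather than kept from the local creation.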

Work in progress. We have to report a problem we are currently facing: in specific circumstances and under heavy load, the GeoGebra method that we use on the clients to locally execute the event received from the server throws a ConcurrentModificationException. Since the exception occurs inside the applet, we believe it may depend on the use of a data structure that was not designed to be accessed by several threads. We are already in contact with the GeoGebra team to solve this problem.

3 Conclusions

The Eclipse-based nature of CoFFEE has allowed us to integrate GeoGebra as Eclipse-based plug-ins and to inherit all the functionalities provided by CoFFEE, such as the communication functionalities, latecomer management, and the integration with other tools.

A remarkable consideration concerns the licenses: GeoGebra is provided under the GPL license while CoFFEE is provided under the EPL license, and these licenses are explicitly incompatible. The GPL requires that any distributed work that contains or is derived from the [GPL-licensed] Program be licensed as a whole under the terms of the GPL, and that the distributor not impose any further restrictions on the recipients' exercise of the rights granted. The EPL, however, requires that anyone distributing the work grant every recipient a license to any patents that they might hold that cover the modifications they have made. Because this is a further restriction on the recipients, distribution of such a combined work does not satisfy the GPL.¹ Indeed, we have neither modified the source code nor do we distribute GeoGebra: we use the applet available on the GeoGebra site. This resolves the incompatibility of the licenses and at the same time guarantees that we can use updated versions of GeoGebra. A different approach is used by Stahl et al. [5], who introduce GeoGebra into a collaborative environment named Virtual Math Teams (VMT), distributed under the GPL license.
They re-distribute GeoGebra and make changes to its source code, while we use GeoGebra as an applet and make no modifications to the original system.

References
1. GeoGebra.
2. De Chiara, R., Di Matteo, A., Manno, I., Scarano, V.: CoFFEE: Cooperative Face2Face Educational Environment. In: Proceedings of the 3rd International Conference on Collaborative Computing: Networking, Applications and Worksharing (CollaborateCom 2007), November 12-15, 2007, New York, USA (2007)
3. De Chiara, R., Manno, I., Scarano, V.: CoFFEE: an Expandable and Rich Platform for Computer-Mediated, Face-to-Face Argumentation in Classroom. In: Educational Technologies for Teaching Argumentation Skills. Bentham eBooks (in press)
4. CoFFEE at SourceForge (2010)
5. Stahl, G., Xiantong Ou, J., Cakir, M., Weimar, S., Goggins, S.: Multi-user support for virtual GeoGebra teams. In: Proceedings of the First North American GeoGebra Conference (GeoGebra NA2010): Research in Visual Mathematics and Cyberlearning (July 2010)

¹ It must be said that the incompatibility of EPL and GPL is causing the two ecosystems to develop independently, mutually absorbing resources and innovation from each other. If the future were to bring an EPL-compatible GPL or a GPL-compatible EPL, that would strongly and positively influence both fields.

Enforcing Team Cooperation project: student opinion

Diego Brondo, Lidia Stanganelli
Università degli Studi di Genova - DIST, Viale Causa 13, 16145, Genova, Italy

Abstract. ETC (Enforcing Team Cooperation) is a project involving several Italian universities together with IBM Rational and IBM Academic Initiative Italy, as well as the Italian Eclipse community. All the actors involved use software of the IBM Rational brand, in particular the collaborative tool RTC, for the realization of software projects. They participate through tools such as RTC (Rational Team Concert), RRC (Rational Requirements Composer) and RQM (Rational Quality Manager). Through RTC, the students collaborate on the software projects assigned to them; RRC provides UML tools for requirements analysis and specification; finally, RQM provides the tools for testing and for everything concerning software quality. The project started in March 2010 and involves many students from several faculties working on different projects. This paper collects the opinions, observations and difficulties encountered in using the platform. Besides the results obtained, we want in particular to highlight the difficulties that the students of the University of Genova have encountered, focusing the analysis on the communicative aspect of the development platform and therefore on the use of RTC.

Keywords: Cooperative learning, Software engineering

1 Introduction

The ETC project [1] (Enforcing Team Cooperation) involves several actors: Italian universities (Federico II of Naples, Milano-Bicocca, Bergamo, Bologna, Genova, Bari-Taranto, Salerno), IBM Academic Initiative Italy [6], IBM Rational, and the Italian Eclipse community [3,5].
The goal is to create collaboration among students from different universities, enrolled both in computer science courses and in others, through the use of innovative software of the IBM Rational brand. The tools used within the project are: RTC (Rational Team Concert), RRC (Rational Requirements Composer) and RQM (Rational Quality Manager). The first enables collaboration among the actors involved, providing tools such as chat for synchronous communication; the second is used for the

specification of requirements, and provides tools, such as UML, for building the software from its specifications; the third, finally, provides all the tools needed for testing. All this gives the students the opportunity to use state-of-the-art design, development, testing and quality-control technologies. Each university has carried out projects in collaboration with the other universities, as well as its own projects within its courses.

2 The work of the teams

Each university has created one or more projects, each involving several students, a champion student (a student finishing a master's degree or a PhD student) and a supervising professor. Figure 1 shows a snapshot of a project built with RTC.

Figure 1. RTC page of the project "Portale SdC"

Each project has a title, a description, members (the participants), a start date, and an end date by which it must be completed. The exercises organized by the University of Genova involved groups of students of the Faculty of Communication Sciences, in the Web Design Laboratory course. These exercises deliberately involved people whose skills were not strictly computer-science oriented, but who had a point of view strongly centered on the communicative aspect: indeed, the goal was to test the communication and interaction capabilities among the team members within the RTC platform [2,4]. The background of the students involved is nevertheless strongly oriented toward the use of information technologies in communication, so the learning curve for the tools used, even though they are intended for expert users in the software development field, was not steep.
