Development of a Tablet-GUI



Autonomous Systems Lab
Prof. Roland Siegwart

Bachelor-Thesis

Development of a Tablet-GUI
Implementation of rosjava in an Android App

Spring Term 2013

Supervised by: Dr. Paul Furgale, Dr. Ralf Kästner
Author: Jonas Eichenberger


Declaration of Originality

I hereby declare that the written work I have submitted entitled Development of a Tablet-GUI for an Inspection Robot is original work which I alone have authored and which is written in my own words.

Author(s): Jonas Eichenberger
Supervising lecturers: Paul Furgale, Ralf Kästner

With the signature I declare that I have been informed regarding normal academic citation rules and that I have read and understood the information on citation etiquette. The citation conventions usual to the discipline in question have been respected. The above written work may be tested electronically for plagiarism.

Place and date, Signature

Abstract

Teleoperated robots are used for inspections in hazardous or unreachable environments. These robots are controlled and supervised through a user interface by a human operator. A user interface contains displays for sensor data and video streams as well as input devices like joysticks. Computer tablets are lightweight, handy and cheap; they offer a touch-sensitive surface and various sensors for input, combined with considerable computational power. With these features, a tablet can combine a complex user interface in one small device. Rosjava, a pure Java implementation of ROS, makes it possible to connect an Android app to the Robot Operating System (ROS). A Graphical User Interface (GUI) for the Ship Inspection Robot (SIR) was developed as an Android application on a computer tablet. The GUI app combines the existing computer GUI and the control device of SIR, a gamepad, in one device. It implements the video stream, different sensor values and information about SIR's status. With two virtual joysticks an operator can control the robot and the mounted camera from the tablet. With one central main window, changeable with tabs, the layout of the GUI keeps the cognitive load for the user as small as possible and sets the focus on the main information. Moreover, the layout considers ergonomic aspects. Through the implementation of rosjava the Android GUI app uses the existing ROS interface of SIR, which needs no additional adaptations.

Keywords: Graphical user interface (GUI), human robot interaction (HRI), teleoperation, Android app, ROS, rosjava, Ship Inspection Robot (SIR)

Acronyms and Abbreviations

ETH    Eidgenössische Technische Hochschule
ZhdK   Zurich University of the Arts
ASL    Autonomous System Lab
HRI    Human Robot Interaction
UAV    Unmanned Aerial Vehicle
IMU    Inertial Measurement Unit
DOF    Degree Of Freedom
SA     Situation Awareness
GUI    Graphical User Interface
SIR    Ship Inspection Robot
ROS    Robot Operating System
IDE    Integrated Development Environment
API    Application Programming Interface
IP     Internet Protocol
URI    Uniform Resource Identifier


Preface

During this Bachelor thesis I learned a lot and was able to develop a Graphical User Interface on an Android tablet. This was only possible with the support of my supervisors and other helpful people, and the resources I got from the ASL. I would like to thank all the people who contributed to my learning experience. Special thanks go to:

Dr. Paul Furgale, Supervisor
Dr. Ralf Kästner, Supervisor
Prof. Dr. Roland Siegwart, Head of the ASL
Stefan Bertschi

Thank you very much for all your patience and support.

Jonas Eichenberger


Contents

Abstract
Acronyms and Abbreviations
Preface

1 Introduction
  1.1 Motivation
  1.2 Goal of this Thesis
  1.3 Context
  1.4 Structure of this Report

2 Ship Inspection Robot
  2.1 Focus Project Ship Inspection Robot
  2.2 Task of SIR
  2.3 Setup of SIR
  2.4 ROS Structure

3 Related Work
  3.1 Telerobotics
    3.1.1 Overview about Telerobotics
  3.2 User Interface
    3.2.1 Situation Awareness
    3.2.2 Overview about User Interface
    3.2.3 User Interface Design

4 Implementation
  4.1 Basic Tools
    4.1.1 Rosjava
    4.1.2 Android
    4.1.3 Computer Tablet: Nexus 10
  4.2 Layout of the Graphical User Interface
    4.2.1 Basic Layout of the GUI
    4.2.2 Elements in the Graphical User Interface
  4.3 Organisation of the Application Project
  4.4 Communication Roscore - Android App
  4.5 Realisation of the App
    4.5.1 Event Handling
    4.5.2 Virtual Joystick
    4.5.3 3D Robot Model
    4.5.4 Video Picture
    4.5.5 Dynamic Plots
    4.5.6 Light Control
    4.5.7 Fillbar for Infrared Sensors etc.

5 Evaluation of the Application
  5.1 Evaluation of the GUI
    5.1.1 Implementation of the GUI Evaluation
    5.1.2 Evaluation of the Results about the GUI
  5.2 Navigation with the Robot
    5.2.1 Implementation of the Navigation Comparison
    5.2.2 Evaluation of the Navigation Comparison

6 Conclusion
  6.1 Achievements of the Current Design
  6.2 Limitations of the Current Design
  6.3 Improvements for the Current Design
  6.4 Future Work
  6.5 Possibilities of Tablets for Inspection Tasks

List of Figures
List of Tables
Listings
Bibliography

A Appendix
  A.1 Technical Specifications Nexus 10
  A.2 Discarded Layouts
  A.3 Source Code of the Layout
  A.4 Source Code of GUI Elements and ROS Nodes
  A.5 Questionnaire for Evaluation
  A.6 Results of the GUI Evaluation

Chapter 1  Introduction

The objective of this Bachelor thesis is to design and implement a Graphical User Interface (GUI) on an Android tablet for the robot SIR. The Ship Inspection Robot (SIR) is a visual inspection tool for the maritime sector and uses ROS (cf. Chap. 2).

1.1 Motivation

Teleoperated robots are often used for inspection tasks such as the inspection of a nuclear power plant or underwater inspection. They are used because the inspection environment is too dangerous for a human, too confined, or simply because a robot is more efficient. Nevertheless, inspection robots need a human supervisor or operator, because the inspection task is very complicated and the knowledge of a human is necessary. The interaction between the robot and the operator happens through a user interface. A user interface for teleoperation may look like Fig. 1.1. It consists of screens that display video feeds and sensor information and of input devices to control the actuators on the robot.

Tablet computers, called tablets, are lightweight and handy and therefore easy to carry around. Today, tablets combine a lot of computational power in a small size. They are able to display all kinds of information on their screen and can gather inputs with their touch-sensitive surface. In addition to the touch-sensitive surface, different sensors like accelerometers or GPS are mounted in the tablet, which give further input possibilities. A tablet thus combines in one affordable device everything needed for a user interface with multiple output and input possibilities.

The Robot Operating System (ROS) is a commonly used tool to develop robot applications and is therefore also used for teleoperated robots. With rosjava, a pure Java implementation of ROS, there is the relatively new possibility to use ROS in an Android application and therefore on a tablet computer or smartphone.

Figure 1.1: User Interface for Teleoperation (from [5])

For this reason an Android application can be used within an existing ROS setup and serve as user interface or even as base computer.

SIR is a teleoperated inspection robot for the visual inspection of ships. SIR inspects the hull condition from the inside of a transport ship. It is operated with a computer Graphical User Interface (GUI) and a gamepad for controlling the robot's speed and the mounted camera. SIR uses ROS for its control.

1.2 Goal of this Thesis

The goals of this Bachelor thesis are the following:

- Design and implement a Graphical User Interface on a tablet for SIR
- Analyse the possibility of a tablet as control device for SIR
- Use the combination of Java and ROS (rosjava) for telerobotics
- Show the possibilities and limitations of rosjava in an Android application

The implementation and design of a GUI for SIR as an Android application is the main goal of this thesis. To achieve this goal, the combination of Java as programming language for the Android app and ROS for the robot control has to be used. The possibilities and limitations of an Android app for telerobotics can thereby be revealed within the scope of this thesis.

1.3 Context

This work is linked with the ETH Focus Project Ship Inspection Robot 2012/13. Within the Focus Project the robot SIR (see Fig. 1.2) was developed. SIR is a visual inspection robot for the maritime sector, which can overcome different obstacles in a vessel thanks to its overlapping wheel alignment.

Figure 1.2: SIR from the Focus Project

For more information about SIR and the Focus Project see Chap. 2. SIR serves as the base for this work.

1.4 Structure of this Report

This report is structured in 6 major chapters. A short overview of the robot SIR and the Focus Project is given in Chap. 2. Related work in GUI design and telerobotics is presented in Chap. 3. The implementation of the GUI for SIR on a tablet is discussed in Chap. 4. In Chap. 5 the evaluation of the work is presented. A conclusion together with an outlook is given in Chap. 6.


Chapter 2  Ship Inspection Robot

The Ship Inspection Robot (SIR) is a new tool for visual ship inspection. With its overlapping magnetic wheel alignment, SIR can overcome almost every profile that can be found in a vessel. The mounted camera delivers a live video feed from the inside of the ship and can take high-resolution images for detailed views. The prototype SIR was developed during the Focus Project Ship Inspection Robot 2012/13 at ETH Zürich. Most of the information in this chapter is taken from the Final Report of the project (c.f. [10]).

2.1 Focus Project Ship Inspection Robot

An interdisciplinary team (see Fig. 2.1) of students from ETH Zurich and the ZhdK developed the robot SIR within the scope of a Focus Project of the Department of Mechanical and Process Engineering (D-MAVT) at ETH Zurich. A Focus Project is a learning project in which students apply the knowledge from their studies over two semesters. The interdisciplinary team SIR involves 10 students:

- 6 mechanical engineering students from ETH Zurich
- 2 electrical engineering students from ETH Zurich
- 2 industrial design students from ZhdK

2.2 Task of SIR

Vessels in the maritime sector need a yearly visual inspection to get the license for transportation and for entering a harbour. The visual inspection is performed by a trained ship inspector, who examines the ship hull for structural damage and deterioration caused by corrosion, material defects etc. To reach every spot within the ship, temporary stages, forklifts, ropes and other tools have to be used. The inspection process is not very safe and the preparation is very time-consuming. Therefore the inspection is very costly.

Figure 2.1: Focus Project SIR Team Photo

The aim of SIR is to support the inspector during the visual inspection. SIR is an inspection tool which delivers images from the inside of the vessel; therefore no temporary stagings have to be built and the inspector can examine the structure safely from the outside of the ship. For stability reasons the hull structure of ships contains many profiles (L-, T-, I- and P-shaped profiles, see Fig. 2.2). SIR has to overcome these profiles to deliver pictures of every spot.

Figure 2.2: Inside of a Ship with Profile Structure

2.3 Setup of SIR

Figure 2.3: SIR with overlapping Wheel Alignment

To overcome the characteristic structure of the ship hull, SIR has a new, innovative overlapping wheel alignment (see Fig. 2.3). The wheels are magnetic, so SIR can climb the steel hull of a ship without being limited by gravity. The most important facts about SIR are summarised in the following list; for more technical details read the Final Report of the project [10].

- 4 magnetic wheels, each individually actuated by a motor
- Overlapping wheel alignment
- Separating mechanism for concave edges, necessary because of the magnetic force of the wheels
- GoPro camera for live video stream and HD images
- Asymmetric pan/tilt camera mount
- Appealing design which conveys the values: reliable, simple, compact, visual inspection
- Modular setup
- 3 LED stripes
- 4 infrared distance sensors
- IMU
- Wireless operation with radio connection and battery (possibility for wired operation)
- On-board motor controllers
- On-board mbed controller for sensors and light
- External host computer with ROS (Robot Operating System) for the robot control

- Graphical User Interface on the computer
- Steered with a gamepad

2.4 ROS Structure

The Ship Inspection Robot consists of many independent subsystems such as the drivetrain or the sensor data acquisition. Therefore a modular approach for the software development was chosen with the Robot Operating System (ROS). ROS provides libraries and tools for the development of robot applications. A feature of ROS is that it encapsulates independent modules into packages. The packages or nodes communicate via so-called topics, named buses, over which messages are exchanged. Messages are input commands to the robot, feedback values etc. The Graphical User Interface (see Chap. 4) developed within this thesis utilizes the ROS setup of SIR. As a ROS node, the GUI can gather the messages published to the topics and also publish messages itself. A detailed overview of all messages and their topics is given in Tab. 2.2 and Tab. 2.4. The tables are divided into output (Tab. 2.2) and input (Tab. 2.4) values. Characteristic for SIR is the small number of possible inputs: only the linear and angular speed of the robot, the light and the camera are controlled. On the other hand the robot delivers a lot of feedback values with its sensors, based on which a good situation awareness can be established.

| hardware | output value | type | freq. [ms] | unit | ROS message | ROS topic name |
|---|---|---|---|---|---|---|
| camera | video feed | | | | | |
| sensors (IMU) | linear acceleration | vec3 [x,y,z] | 10 | m/s2 | SIRImu | /Sensors/Imu |
| sensors (IMU) | angular speed | vec3 [x,y,z] | 10 | rad/s | SIRImu | /Sensors/Imu |
| sensors (IMU) | magnetic flux | vec3 [x,y,z] | 100 | mGauss | SIRMagneticField | |
| sensors | pressure | vec1 | 500 | mbar | SIRPressTemp | /Sensors/PressTemp |
| sensors | temperature | vec1 | 500 | Celsius | SIRPressTemp | /Sensors/PressTemp |
| sensors | position (calculated) | vec3 | | deg | Vector3 | /Sensors/AngOrientation |
| sensors | IR distance | vec4 | 50 | cm | SIRDistances | /Sensors/Distances |
| power board | voltage | vec1 | 50 | V | SIRPowerSupply | /Sensors/PowerSupply |
| power board | current | vec1 | 50 | A | SIRPowerSupply | /Sensors/PowerSupply |
| motors (4x) | speeds | vec4 | 10 | rps | DrivetrainData | /sir/base_controller/drivetrain_data |
| motors (4x) | torques | vec4 | 10 | mN m | DrivetrainData | /sir/base_controller/drivetrain_data |
| motors (4x) | revolutions | vec4 | 10 | rev | DrivetrainData | /sir/base_controller/drivetrain_data |
| motors (4x) | wheel radius | vec1 | 10 | m | DrivetrainData | /sir/base_controller/drivetrain_data |
| motors (4x) | number of motors | vec1 | 10 | | DrivetrainData | /sir/base_controller/drivetrain_data |
| motors (4x) | gear ratio (ideal) | vec1 | 10 | | DrivetrainData | /sir/base_controller/drivetrain_data |
| motors (4x) | gear ratio (effective) | vec1 | 10 | | DrivetrainData | /sir/base_controller/drivetrain_data |
| radio connection | infos | vec22 | | | SIRWirelessStats | /ETHRadioStats |

Table 2.2: Output Messages of SIR in ROS

| hardware | input value | type | freq. [ms] | unit | ROS msg | ROS topic name |
|---|---|---|---|---|---|---|
| LED | LED cam intensity [0-255] | vec1 [0-255] | - | | SIRsetLEDs | mbed_com/leds |
| LED | LED left intensity [0-255] | vec1 [0-255] | - | | SIRsetLEDs | mbed_com/leds |
| LED | LED right intensity [0-255] | vec1 [0-255] | - | | SIRsetLEDs | mbed_com/leds |
| robot | linear speed | vec3 [x,0,0] | 50 | cm/s | Twist | base_controller/cmd_vel |
| robot | angular speed | vec3 [0,0,z] | 50 | rad/s | Twist | base_controller/cmd_vel |
| camera | pan position | vec1 | | | SIRsetCAM | mbed_com/pan_tilt |
| camera | tilt position | vec1 | | | SIRsetCAM | mbed_com/pan_tilt |
| camera | start video-stream | | | | Service | |
| camera | start video mode | | | | Service | |
| camera | start photo mode | | | | Service | |
| camera | flip picture | | | | Service | |
| camera | shoot picture / start record | | | | Service | |
| camera | stop record | | | | Service | |
| camera | enable preview | | | | Service | |
| camera | preview | | | | Service | |
| position | reset calculated position | | | | Service | |

Table 2.4: Input Messages to ROS for SIR
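As an illustration of how such messages are exchanged programmatically, the following sketch shows a minimal node that subscribes to the speed commands from Tab. 2.4. It uses the rosjava client library introduced in Chap. 4; the class and node names are illustrative and not part of SIR's code base.

```java
import org.ros.message.MessageListener;
import org.ros.namespace.GraphName;
import org.ros.node.AbstractNodeMain;
import org.ros.node.ConnectedNode;
import org.ros.node.topic.Subscriber;

// Minimal sketch (not taken from the thesis code): subscribe to the Twist
// messages on base_controller/cmd_vel listed in Tab. 2.4.
public class CmdVelListener extends AbstractNodeMain {

    @Override
    public GraphName getDefaultNodeName() {
        return GraphName.of("sir/cmd_vel_listener");
    }

    @Override
    public void onStart(ConnectedNode connectedNode) {
        Subscriber<geometry_msgs.Twist> subscriber =
                connectedNode.newSubscriber("base_controller/cmd_vel", geometry_msgs.Twist._TYPE);
        subscriber.addMessageListener(new MessageListener<geometry_msgs.Twist>() {
            @Override
            public void onNewMessage(geometry_msgs.Twist twist) {
                // React to the commanded linear (x) and angular (z) speed here.
            }
        });
    }
}
```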

Chapter 3  Related Work

Robotic applications are used in industry, space, undersea, surgery, military operations and many other fields. A special group of robots are mission robots, which are designed to perform very complex tasks. These robots are usually able to navigate through the environment and interact with their surroundings. Because of this complexity these robots are not autonomous most of the time, which means that the robot is controlled and supervised by a human. [1] This form of human robot interaction (HRI) is called telerobotics and is introduced in the following section.

3.1 Telerobotics

Telerobotics, and more generally teleoperation, describes a person's capability to interact with a remote location. Telerobotics is defined by Sheridan in the following way: "Telerobotics is a form of teleoperation in which a human operator acts as a supervisor, intermittently communicating to a computer information about goals, constraints, plans, contingencies, assumptions, suggestions and orders relative to a limited task, getting back information about accomplishments, difficulties, concerns, and, as requested, raw sensory data, while the subordinate telerobot executes the task based on information received from the human operator plus its own artificial sensing and intelligence." [9]

3.1.1 Overview about Telerobotics

In the 1940s the first modern master-slave teleoperators were developed. One of the first impressive teleoperators was Handyman (1958) with two electrohydraulic arms and 10 DOF (Fig. 3.1). [9] An important field are teleoperated vehicles, which are used in the air, on the ground and under water. The first teleoperated aircraft were used for anti-aircraft training, like the RP-5.

Figure 3.1: Handyman, built at General Electric Co.

Today Unmanned Aerial Vehicles (UAV) are the most widely used teleoperated air vehicles. They are used for surveillance, remote sensing, domestic policing, target identification, armed attacks in combat etc. Teleoperated ground vehicles are often designed for science tasks, such as exploration and sample collection. They are also used to inspect hazardous environments like nuclear reactors, an example being the Pioneer robot designed to enter the Chernobyl nuclear power plant [7]. [5] A current example of a teleoperated ground vehicle is the Mars rover Curiosity (Fig. 3.2), which landed on Mars in August 2012. Its mission is to determine if the red planet had an environment able to support small life forms (microbes).

Figure 3.2: Mars rover Curiosity, a teleoperated ground vehicle

Unmanned submersibles, so-called Remotely Operated Vehicles (ROV), have existed since the early 1900s.

With the success of recovering a nuclear bomb (accidentally dropped from an airplane) from the deep ocean bottom by the U.S. Navy's CURV vehicle in 1966 [9], the commercial development started. Today ROVs represent the largest market; they are used for survey, inspection, oceanography and many other tasks. [5]

3.2 User Interface

The information about the surroundings, location, activities and status of a teleoperated robot is gathered solely through the interface. [6] Hence the user interface plays an important role in human robot interaction and is a key to a fast and successful mission. Important terms, a general overview and lessons learned from designing interfaces are summarised in the following.

3.2.1 Situation Awareness

Knowledge about the state of the robot and the actual environment is very important for efficient remote control. Telepresence and situation awareness (SA) refer to the operator's awareness of the robot's environment. [8] Endsley defines situation awareness as: "The perception of the elements in the environment within a volume of time and space, the comprehension of their meaning and the projection of their status in the near future." [4] An additional definition is: "...awareness is an understanding of the activities of others, which provides a context for your own activity." [3]

Situation awareness can be divided into three levels: [1]

- Level 1: Perception. The operator perceives cues in the environment, that is, he is able to notice important information.
- Level 2: Comprehension. Integration, storage and retention of information. This involves not only finding pieces of information, but also making sense of them.
- Level 3: Projection. Forecasting future situation events and dynamics from the current situation. It allows for timely actions and is a characteristic of an expert user.

Situation awareness therefore not only implies being aware of the remote environment; understanding the significance of the presented elements and hence being able to make further decisions are also part of the idea. Factors which influence situation awareness are workload, system factors and environmental factors. Related to situation awareness are other robot-interaction measures such as neglect time, interaction time and fan-out. [1]

3.2.2 Overview about User Interface

The user interface is the remote operator's tool to communicate with the robot. It delivers sensor data, pictures and other information collected by the robot to the user, and the user may send inputs/commands to the robot. User interfaces are separated into four categories by Fong/Thorpe [5]:

- Direct: The operator directs the vehicle via hand-controllers while watching video from vehicle-mounted cameras (Fig. 3.3).
- Multimodal/Multisensor: A multimodal interface provides the user with a variety of control modes (individual actuators, coordinated motion, etc.) and displays (text, visual, etc.). Multisensor displays combine information from several sensors or data sources to present a single integrated view.
- Supervisory Control: In supervisory control the operator divides the problem into sequences of subtasks which the robot executes on its own; this requires the robot to have some level of autonomy.
- Novel: Novel interfaces use new technologies and ideas, for example hands-free remote driving interfaces based on brainwaves and muscle movement monitoring, input through gesture recognition, web-based interfaces, PDAs, etc.

Figure 3.3: Example for a direct interface (from [5])

Inputs to the interface can be given through different devices. Important input devices are: [1]

- Keyboard
- Mouse
- Joystick
- Touchscreen
- Tablet displays
- Audio input
- Motion tracking

3.2.3 User Interface Design

To improve a user's situation awareness the right design is very important. It has been shown that a good user interface, and therefore a better situation awareness, can decrease the overall mission time. [8] Several studies showed that it is a challenge to design interfaces which provide a good situation awareness. [6] Based on experiments and studies, Yanco et al. [12] presented some guidelines for designing interfaces for human-robot interaction. Keyes et al. [6] complemented these guidelines:

- A map of where the robot has been.
- Fused sensor information to lower the cognitive load on the user.
- Support for multiple robots in a single display (in the case of a multi-robot system).
- Minimal use of multiple windows.
- Spatial information about the robot in the environment.
- Help in deciding which level of autonomy is most useful.
- A frame of reference to determine the position of the robot relative to its environment.
- Indicators of robot health/state, including which camera is being used, the position(s) of camera(s), traction information and pitch/roll indicators.
- A view of the robot's body so operators can inspect for damage or entangled obstacles.
- Provide consistency, especially consistency between robot behavior and what the operator has been led to believe based on the interface.
- Provide feedback.
- Use a clear and simple design.
- Ensure the interface helps to prevent, and recover from, errors made by the operator or the robot.
- Follow real-world conventions, e.g. for how error messages are presented in other applications.
- Provide a forgiving interface, allowing for reversible actions on the part of the operator or the robot as much as possible.
- Ensure that the interface makes it obvious what actions are available at any given point.
- Enable efficient operation.
- Enable an understanding of the robot's location in the environment.
- Facilitate the operator's knowledge of the robot's activities.
- Provide to the operator an understanding of the robot's immediate surroundings.
- Enable the operator to understand the robot's status.

- Facilitate an understanding of the overall mission and the moment-by-moment progress towards completing the mission.

Nielsen et al. [8] proposed to focus on the display of effective environmental cues in the user interface rather than only on the accuracy with which the environment is presented.

2D/3D Interfaces for Position Displays: The user performance in teleoperated tasks with 2D and 3D interfaces for the display of an object's position has been compared in different studies. "Strict 3D displays can be effective for approximate relative position estimation and orientation. However, precise orientation and positioning are difficult. (...) For such precise tasks, combination 2D/3D displays were better than strict 2D or 3D displays." This is what Tory et al. [11] concluded from their experiment. "Subjectively, the participants preferred the 3-D interface to the 2-D interface and felt that they did better, were less frustrated, and better able to anticipate how the robot would respond to their commands. The ability of the operator to stay further away from obstacles with the 3-D interface is a strong indication of the operator's navigational awareness." This was the result of Nielsen et al.'s [8] experiments. From this they proposed three principles for a successful interface:

- present a common reference frame
- provide visual support for the correlation of action and response
- allow an adjustable perspective

Chapter 4  Implementation

Based on the goal of this thesis a Graphical User Interface on a computer tablet was developed. The Graphical User Interface and all tools needed for the development are explained in this chapter. All layers of the project and the app are covered, from the communication over the layout to the final product.

4.1 Basic Tools

The Graphical User Interface designed within the scope of this thesis is implemented as an Android application on a tablet. The application is programmed in Java and uses rosjava for the communication with ROS.

4.1.1 Rosjava

Rosjava is a pure Java implementation of ROS. (ROS, the Robot Operating System, provides libraries and tools that help software developers create robot applications: hardware abstraction, device drivers, libraries, visualizers, message-passing, package management and more. ROS is licensed under an open source BSD license.) Rosjava provides a client library that makes it possible to quickly interface with ROS topics, services and parameters. The main idea of rosjava is to bring ROS onto Android devices like smartphones and tablets. In 2011, rosjava was announced at the Cloud Robotics tech talk at Google I/O. Rosjava was developed at Google in cooperation with Willow Garage, the developer of ROS, and is still under active development.

4.1.2 Android

Android is an operating system primarily designed for touchscreen mobile devices. Android is open source. Thanks to Android's open nature, there are many possibilities to use it in robotic applications.

One new possibility is the interface between Android and ROS (cf. Sec. 4.1.1). Applications ("apps") for Android are developed with the programming language Java.

4.1.3 Computer Tablet: Nexus 10

Figure 4.1: Android Tablet Nexus 10

For the GUI app the Android tablet Nexus 10 (see Fig. 4.1) is used. The Nexus 10 offers a good screen size of 10 inches combined with a light weight (see the technical specifications in App. A.1), which is why it was chosen for the GUI app.

4.2 Layout of the Graphical User Interface

Figure 4.2: The Graphical User Interface for SIR

Fig. 4.2 shows the Graphical User Interface for SIR on the tablet. The GUI is divided into six parts and contains diverse elements to control the robot and to show information about the robot's status.

4.2.1 Basic Layout of the GUI

Figure 4.3: Blank Layout of the GUI

The blank layout of the GUI is shown in Fig. 4.3. The layout is split into six parts, which are indicated by different colors. In the middle of the layout lies the central main frame. In this frame the most important information is displayed to the operator. With tabs, the user can show arbitrary information in the middle and change it based on his actual needs. With only a single main frame the cognitive load for the user is minimized. Around the main frame three additional peripheral windows are placed. These windows can contain information which does not need the permanent attention of the operator but should nevertheless always be displayed, like the battery status. They also contain elements which are used in parallel to the main window, like the light control. The operator has these windows in his peripheral view. Space is reserved for control units like the virtual joysticks in the bottom right and left corners of the tablet. Based on ergonomic considerations these areas are well reachable for the user's thumbs, which allows comfortable handling.

The layout for the Graphical User Interface was first developed on paper. In a next step it was visualised with Photoshop and afterwards realised in the Android app through XML files. The development of the layout is visualised in Fig. 4.4 to Fig. 4.6.

In Android, the basic layout is defined in an XML file (see App. List. A.1) with nested LinearLayouts and RelativeLayouts. The layout with tabs is defined in a separate XML file (see App. List. A.2): nested FrameLayouts in a LinearLayout surrounded by a TabHost are used for the display of the tabs. The display of the correct tab and the change on touch is controlled by an own class called TabsFragment (see App. List. A.3).
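The nesting described above can be sketched as follows. This is a minimal, illustrative layout and not the thesis' actual XML (whose full source is in App. A.3); the IDs of the peripheral panels and the layout weights are assumptions.

```xml
<!-- Illustrative sketch: central TabHost with nested FrameLayout for the tab
     content, flanked by peripheral RelativeLayouts. IDs and weights are assumed. -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="horizontal">

    <RelativeLayout
        android:id="@+id/left_panel"
        android:layout_width="0dp"
        android:layout_height="match_parent"
        android:layout_weight="1" />

    <TabHost
        android:id="@android:id/tabhost"
        android:layout_width="0dp"
        android:layout_height="match_parent"
        android:layout_weight="3">

        <LinearLayout
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            android:orientation="vertical">

            <TabWidget
                android:id="@android:id/tabs"
                android:layout_width="match_parent"
                android:layout_height="wrap_content" />

            <FrameLayout
                android:id="@android:id/tabcontent"
                android:layout_width="match_parent"
                android:layout_height="match_parent" />
        </LinearLayout>
    </TabHost>

    <RelativeLayout
        android:id="@+id/right_panel"
        android:layout_width="0dp"
        android:layout_height="match_parent"
        android:layout_weight="1" />
</LinearLayout>
```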

Figure 4.4: Hand Sketch. Figure 4.5: Photoshop Sketch. Figure 4.6: Realised Layout

Besides the realised layout with the central tab widget there were other ideas. These ideas were discarded because they were too complicated to develop, too unintuitive, or put too much load on the operator. An overview of the discarded layouts can be found in App. A.2. The guidelines by Keyes et al. [6] (c.f. Sec. 3.2.3) were always at hand during the development of the layout for the GUI.

4.2.2 Elements in the Graphical User Interface

The Graphical User Interface has different elements to control the robot and to display information based on the sensors mounted on the robot (c.f. Sec. 2.3). All elements and their purpose are explained in Tab. 4.1; details about their programming can be found in Sec. 4.5.

- 2 virtual joysticks are implemented to drive the robot and to control the position (pan/tilt) of the mounted camera. In addition to each joystick there are 2 switch buttons to reduce the initial 2 DOF for exact movements in just one direction.
- 3 sliders control the brightness of the corresponding LEDs on the robot.
- The video feed is displayed centrally in the middle of the GUI.

- A 3D model of SIR displays the orientation of the robot with respect to the gravity vector. With the appropriate camera setting it would be possible to turn the 3D view into a 2D view for more precise orientation.
- A simple button resets the position of the 3D model to the origin.
- SIR has 4 separate motors. A dynamic plot displays the speed of each motor in one plot for good comparison and monitoring.
- The motors not only give feedback about their current speed but also about their torque. The torques are displayed in a dynamic plot, which helps to recognise if any motor is blocked.
- The consumed current is displayed in a dynamic plot.
- The battery charge, i.e. the actual voltage, is displayed as a number. Because this number is not meaningful to most users, the battery charge is also displayed in a fillbar which indicates the actual charge at a glance.
- The distance to an obstacle in front of the robot is indicated with 2 fillbars for the left and right infrared sensor at the bottom of the robot. SIR has 4 infrared sensors mounted (c.f. Sec. 2.3), but only 2 are displayed because the others are not calibrated.

Table 4.1: Overview about the Elements in the GUI

4.3 Organisation of the Application Project

Android applications are developed with the IDE Eclipse, because the Android Development Tools (ADT) exist as plugins for Eclipse, which makes it easy to create a new Android project. For the rosjava integration, four separate projects are built in Eclipse which contain all the code related to rosjava. Therefore the Android application project is strictly separated from rosjava. The rosjava projects are linked with the GUI project (see Fig. 4.7).

Figure 4.7: Linked rosjava and GUI projects in Eclipse

The separate Eclipse projects are explained in more detail in the following list:

- GUI App: This project contains the actual Android application with all drawables, classes etc. For more details cf. Sec. 4.5.
- ros: Contains the classes for the interface with the roscore.
- ros_android: The Android Activity for the communication with ROS is in this project.
- ros_bootstrap: This project contains the classes for the ROS message handling.
- ros_message: All ROS messages and services are built in this project for Java. It is therefore possible to generate one's own ROS messages and services.
- xmlrps: Classes for the Apache XML-RPC protocol (see Sec. 4.4) are in this project.

Setting up a new Android application in Eclipse is straightforward. The integration of the rosjava bindings distributed over four projects is trickier, because a lot of connections between the projects are necessary. These complex dependencies are a point for improvement for the usage of ROS in Android applications.

Figure 4.8: Schematic Communication between the App and the roscore

4.4 Communication Roscore - Android App

The Android application with rosjava (c.f. Sec. 4.1.1) communicates wirelessly with the roscore running on the host computer (see Fig. 4.8). For the communication rosjava uses the XML-RPC protocol over HTTP.

XML-RPC: "XML-RPC is a Remote Procedure Calling protocol that works over the Internet. An XML-RPC message is an HTTP-POST request. The body of the request is in XML. A procedure executes on the server and the value it returns is also formatted in XML. Procedure parameters can be scalars, numbers, strings, dates, etc.; and can also be complex record and list structures."

The IP address of the host computer in the WLAN shared with the Android app has to be set as the ROS environment variable ROS_IP before starting the roscore:

    export ROS_IP="HERE THE IP"

In the Android app with rosjava, the IP address of the host computer together with the port of the roscore (:11311) has to be entered at the beginning. The input window for this URI is a standard component of rosjava defined in the ros_android library (c.f. Sec. 4.3). The input of the connection data for the roscore could be integrated better and more nicely into the Android application project; this is a point for further improvements.
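On the rosjava side, pointing a node at this roscore is typically done through a NodeConfiguration. The following is a hedged sketch of that pattern, not code from the thesis; the IP address is a placeholder and SpeedCmdRobo refers to the publisher node shown later in List. 4.5.

```java
import java.net.URI;

import org.ros.address.InetAddressFactory;
import org.ros.node.DefaultNodeMainExecutor;
import org.ros.node.NodeConfiguration;
import org.ros.node.NodeMainExecutor;

// Sketch: build a node configuration pointing at the roscore entered by the
// operator (host IP, port 11311) and start a node on it.
public class RosConnectionSketch {
    public static void connect() {
        URI masterUri = URI.create("http://192.168.1.10:11311");  // placeholder address

        NodeConfiguration config = NodeConfiguration.newPublic(
                InetAddressFactory.newNonLoopback().getHostAddress());
        config.setMasterUri(masterUri);

        NodeMainExecutor executor = DefaultNodeMainExecutor.newDefault();
        executor.execute(new SpeedCmdRobo(), config);  // SpeedCmdRobo: node from List. 4.5
    }
}
```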

4.5 Realisation of the App

The Android GUI app for SIR, named SIRController, is implemented in pure Java. The most important parts of the application, generally described in Sec. 4.2, are discussed from a programming point of view in the following subsections. The whole application project can be found as a git project on https://github.com/jonas-/sircontroller.git (git is a distributed version control and source code management system). Some source code parts are added to the appendix (see App. A.4). A special focus of this section lies on the interface between the GUI components and ROS. The described components are:

- Virtual joystick
- Video display
- 3D model of the robot
- Different plots (speed, torque, current)
- Light control
- Distance sensors and battery status

4.5.1 Event Handling

In the application, the GUI components are strictly separated from the code which contains the Java ROS nodes for publishing and subscribing. The GUI components, like buttons, interact with the ROS nodes through events. Fig. 4.9 shows the schematic principle of the event handling.

Figure 4.9: Schematic Event Handling

The user generates an input. This input triggers a custom event, defined in the event handler, which can transmit different values. The Java ROS node listens to this event and can, based on the event, send ROS messages or call services. The procedure works the same way when a ROS subscriber triggers an event which, for example, changes some values in the GUI. The user input, often a touch, is itself an event, a standard Android event with a predefined event handler. The code for event triggering is shown in List. 4.1.

    // ---- Event handler
    private List _listeners = new ArrayList();

    public synchronized void addEventListener(LightHandlerListener listener) {
        _listeners.add(listener);
    }

    public synchronized void removeEventListener(LightHandlerListener listener) {
        _listeners.remove(listener);
    }

    private synchronized void fireLightValues(int cam, int left, int right) {
        LightHandler event = new LightHandler(this, cam, left, right);
        Iterator i = _listeners.iterator();
        while (i.hasNext()) {
            ((LightHandlerListener) i.next()).handleLightEvent(event);
        }
    }
    // ----

Listing 4.1: Code for event triggering

There is a function for adding and removing event listeners; hence the event can be distributed among arbitrary listeners. The function fireLightValues() creates a new predefined custom event (see List. 4.2) and sends it to the listeners.

    import java.util.EventObject;

    public class LightHandler extends java.util.EventObject {

        int cam, left, right;

        public LightHandler(Object source, int cam, int left, int right) {
            super(source);
            this.cam = cam;
            this.left = left;
            this.right = right;
        }

        public interface LightHandlerListener {
            public void handleLightEvent(LightHandler handler);
        }
    }

Listing 4.2: Event Handler class

A listener (see List. 4.3) implements the LightHandlerListener interface defined inside the event handler class and thereby receives the event.

    public class LightPublisher implements LightHandlerListener {

        public void handleLightEvent(LightHandler handler) {
            /* HERE: react on the event */
        }
    }

Listing 4.3: Event Listener

4.5.2 Virtual Joystick

The virtual joystick is a digital implementation of a joystick with 2 DOF, known e.g. from a Playstation controller. In this app two joysticks are used to control the robot speed and the camera position. The GUI element and the Java ROS node for publishing are described in the following.

GUI Element

Figure 4.10: Virtual Joystick

The virtual joystick (see Fig. 4.10) is implemented as a class so that it can be reused:

    VirtualJoystick(RelativeLayout layout, int midpointX, int midpointY, int radius, String name)

The class needs as input:

- the layout to which it gets attached
- the midpoint
- its radius for the maximum movement
- a name, so the events from different instances can be distinguished

The joystick contains a red bitmap which can be moved around in the predefined circle based on touch events and therefore serves as the virtual knob of the joystick. The position of the bitmap knob is controlled by the function updatePosition() (see App. List. A.4). Additionally, the virtual joystick has two sliders. With these sliders one axis can be locked and the initial 2 DOF thus reduced. The sliders call the function lockAxis(char axis, boolean bol). Due to these sliders it is e.g. possible to make a theoretically perfect turn-on-spot with the robot. When the joystick changes its position, it sends an event with its new position:

    private synchronized void firePosition() {
        // Log.i("Joy", "fireEvent called");
        JoyHandler event = new JoyHandler(this, getDX(), getDY(), _name);
        Iterator i = _listeners.iterator();
        while (i.hasNext()) {
            ((JoyHandlerListener) i.next()).handleJoyEvent(event);
        }
    }

Listing 4.4: Event from Virtual Joystick

A Java class can listen to this fired event. In the case of the app SIRController, a ROS node listens to the virtual joystick.

Generate ROS Message

One virtual joystick is used to generate a ROS Twist message for the robot speed; a second joystick is used for the camera pan/tilt control. In the following, the basic principle of how to generate a ROS message in Java is introduced. A general ROS node in Java extends the base class AbstractNodeMain from the ros project (c.f. Sec. 4.3) and has the following form:

    public class SpeedCmdRobo extends AbstractNodeMain {

        @Override
        public GraphName getDefaultNodeName() {
            return GraphName.of("SomethingLikeTheNodeName");
        }

        // called when connected to ROS
        @Override
        public void onStart(final ConnectedNode connectedNode) {

        }
    }

In the function onStart(final ConnectedNode connectedNode) one can subscribe or publish to a ROS topic:

    publisherRobo = connectedNode.newPublisher("sir/base_controller/cmd_vel", geometry_msgs.Twist._TYPE);

Listing 4.5: Example for a Java ROS publisher

For publishing, the proper message according to the publisher has to be generated; it can then be published with the command publisher.publish(message):

    geometry_msgs.Twist twist = publisherRobo.newMessage();
    twist.getLinear().setX(this.spdX);    // set linear speed
    twist.getAngular().setZ(this.spdZ);   // set angular speed
    publisherRobo.publish(twist);

Listing 4.6: Creating and publishing a ROS message in Java

In the case of the speed control for SIR a constant publishing rate of 50 Hz is necessary. Therefore an extra timer task is needed, because the user does not generate constant events with the virtual joystick. The last input from the joystick is stored and constantly published through a timer with a custom TimerTask (see List. 4.7).

    // ---- Task manager for constant publishing
    private class ttask_class extends TimerTask {

        private float spdX;
        private float spdZ;
        private Publisher<geometry_msgs.Twist> publisherRobo;

        public ttask_class(Publisher<geometry_msgs.Twist> d) {
            this.spdX = 0;
            this.spdZ = 0;
            this.publisherRobo = d;
        }

        public void setNewSpeeds(float spdX, float spdZ) {
            synchronized (this) {
                this.spdX = spdX;
                this.spdZ = spdZ;
            }
        }

        @Override
        public void run() {
            geometry_msgs.Twist twist = publisherRobo.newMessage();
            synchronized (this) {
                twist.getLinear().setX(this.spdX);    // set linear speed
                twist.getAngular().setZ(this.spdZ);   // set angular speed
            }
            publisherRobo.publish(twist);
        }
    }
    // ----

    // called when connected to ROS
    public void onStart(final ConnectedNode connectedNode) {

        publisherRobo = connectedNode.newPublisher("sir/base_controller/cmd_vel", geometry_msgs.Twist._TYPE);

        ttask = new ttask_class(publisherRobo);

        mTimer = new Timer();
        mTimer.schedule(ttask, 0, 20);  // publish with const. 50 Hz
    }

Listing 4.7: TimerTask for constant publishing

4.5.3 3D Robot Model

In order to give the operator feedback about the orientation of the robot, a 3D model of the robot was implemented. A 3D model was chosen over 2D implementations because experiments showed (c.f. Sec. 3.2.3) that a 3D view is preferred by operators and gives a better situation awareness. With the right camera setting the 3D model view can be turned into a 2D view, which supports very precise navigation, assuming precise values are available. In addition to the model, a position reset button is implemented, which calls a ROS service and sets the robot position back to the origin.

Robot Model

Figure 4.11: Simplified 3D Model of SIR

For the 3D view of the robot the Rajawali 3D engine for Android is used. Rajawali is based on OpenGL ES. With Rajawali it is easy to load surface files like .obj or .stl files and to handle 3D objects. The used CAD program NX can export .stl files from an assembly. For the 3D model in the Android app a simplified model of SIR (see Fig. 4.11) was designed, which still has the characteristic outer geometry. A simplified model with fewer vertices was necessary for viable performance. In List. 4.8 the source code for the parsing of the .stl file is printed.

    // parse object from .stl
    StlParser stlParser = new StlParser(mContext.getResources(), mTextureManager, R.raw.model_sir_stl);

    try {
        stlParser.parse();
    } catch (ParsingException e) {
        e.printStackTrace();
    }

    // create object from .stl file
    mObject = stlParser.getParsedObject();

Listing 4.8: Parsing an .stl file with Rajawali

Create ROS Service

To reset the position of the 3D model, a ROS service was implemented which resets the externally calculated position of the robot. The source code for the ROS service of the 3D model is listed in App. List. A.5. The ROS service requires:

- a ServiceClient for the connection with the right service,
- a ServiceResponseListener for the possibility to react on the response, and
- a Request based on the service.

A minimal sketch of this call pattern is given below.
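This sketch illustrates the general rosjava service-call pattern just listed; it is not the thesis' code (that is in App. List. A.5), and the service name and the use of the std_srvs/Empty type are assumptions.

```java
import org.ros.exception.RemoteException;
import org.ros.exception.ServiceNotFoundException;
import org.ros.node.ConnectedNode;
import org.ros.node.service.ServiceClient;
import org.ros.node.service.ServiceResponseListener;

// Sketch: call a (hypothetical) position-reset service from inside onStart().
public void callResetService(ConnectedNode connectedNode) {
    ServiceClient<std_srvs.EmptyRequest, std_srvs.EmptyResponse> client;
    try {
        client = connectedNode.newServiceClient("/Sensors/reset_position", std_srvs.Empty._TYPE);
    } catch (ServiceNotFoundException e) {
        throw new RuntimeException(e);
    }

    std_srvs.EmptyRequest request = client.newMessage();
    client.call(request, new ServiceResponseListener<std_srvs.EmptyResponse>() {
        @Override
        public void onSuccess(std_srvs.EmptyResponse response) {
            // Reset acknowledged; the 3D model can be re-oriented to the origin.
        }

        @Override
        public void onFailure(RemoteException e) {
            // React to a failed reset, e.g. notify the operator.
        }
    });
}
```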

4.5.4 Video Picture

Figure 4.12: Video Screen in the GUI

The camera picture from the mounted GoPro camera is displayed in the GUI app (see Fig. 4.12). The camera image is published raw in ROS. For the display in the GUI it needs to be compressed; therefore it is republished by the image_transport node on the computer:

    rosrun image_transport republish raw in:=/TopicName compressed out:=TopicNameCompressed

The ros_android library (c.f. Sec. 4.3) contains the class BitmapFromCompressedImage. With this class it is possible to decode a ROS message of type sensor_msgs.CompressedImage into a bitmap usable in the Android application:

    BitmapFromCompressedImage bitFromCom = new BitmapFromCompressedImage();
    Bitmap bit = bitFromCom.call(handler.image);

This converted bitmap can be added to an ImageView, an Android container for images. By adding new images quickly, a video is created. Android does not allow changing the content of the GUI from a thread other than the one it was created in. Therefore the updating of the image, based on an event from the ROS subscriber, has to be done within the Android function runOnUiThread() (see List. 4.9).

    public void handleVideo(VideoHandler handler) {

        final Bitmap bit = bitFromCom.call(handler.image);

        runOnUiThread(new Runnable() {
            public void run() {
                video.setBitmap(bit);
            }
        });
    }

Listing 4.9: Update of the video picture inside runOnUiThread()

Video Stream - Latency

The video display is not very smooth and has some delay. Tab. 4.2 lists the points where delay is caused.

| # | cause | delay [s] | possible improvement |
|---|---|---|---|
| 1 | analog radio transmission | 0.3 | different transmission device on the robot |
| 2 | A/D conversion | 2 | transmit a digital signal, hence no conversion |
| 3 | publishing in ROS | 1 | possibility to use the video stream without ROS |
| 4 | republishing in ROS from raw to compressed | 0.5 | correct (compressed) publishing at the beginning |
| 5 | wireless transmission from the computer to the tablet | halts | free connection, no other data flow |
| 6 | handling, conversion to bitmap and display in the app | 0.1 | format better suited for the display in the app |
|   | total | approx. 4.1 | |

Table 4.2: Caused Latency in the Video Stream

Points 1-3 are due to the robot and cannot be changed without a redesign of the robot.

The transmission of the analogue video signal to the base computer and the publishing in ROS cause over 3 s of delay. No effective teleoperation is possible with such a delay; a redesign is inevitable for a usable teleoperation with the video picture. An additional delay is caused by the fact that the image is published in the wrong format for the display in the app and needs to be republished compressed. This delay can be prevented by publishing correctly at the beginning. These losses occur before the Android app even interacts with the roscore. A comparison with a live video feed from the laptop camera, where no A/D conversion and no republishing are necessary, showed that only a negligible latency is caused from the laptop to the tablet. Therefore the display and conversion in the Android app are suitable for teleoperation. There may be possible improvements in the source code of the app, but they would have only a small impact on the latency.

The video stream is not smooth and has some halts when the wireless connection between the tablet and the roscore is used for other activities, like connecting the laptop to the world wide web. Therefore the wireless connection should be used only for the data transmission between the app and the roscore. A fluent video stream is essential for effective teleoperation and needs some improvements in hardware and software.

4.5.5 Dynamic Plots

Figure 4.13: Dynamic Plots

For the display of the motor data from SIR, the motor speed and torque, as well as for the current display, dynamic plots (see Fig. 4.13) are implemented in the GUI app. The plots are created with the AndroidPlot API:

    PlotClass(FrameLayout layout, String datasource, String datalabel, int numval)

The PlotClass for the speed, torque and current plots needs:

- the layout to which it gets attached
- a String for the data source: speed, torque or current
- a String for the legend label
- the number of dimensions for the different values

The PlotClass uses the XYPlot class from the AndroidPlot library. The dynamic movement is realised by adding the new value at the beginning of the plot data source and redrawing the plot (see List. 4.10). The plot has a certain length; therefore, for better performance, only the values which can be displayed are stored and the old values are removed.

    for (int i = 0; i < N; i++) {
        _seriesContainer[i].addFirst(data[i], data[i]);  // add new values at the beginning

        // free values at the end
        if (_seriesContainer[i].size() > _maxX) {
            _seriesContainer[i].removeLast();
        }
    }
    _plot.redraw();

Listing 4.10: Realisation of the Dynamic Plot

Because the PlotClass is reused for the speed, torque and current plots, the following functions are included so that the plot is customizable:

- setTitle(String title)
- setYBoundaries(int min, int max)
- setYLabel(String label)
- setXLabel(String label)

The PlotClass is reusable for the speed, torque and current plot, but it is not totally object-oriented because the connection to the data source, in this case the ROS node, is implemented in the class. For general usage the data should be added externally with a new function. The plot moves whenever a new value is added; it therefore does not plot perfectly over time, because the values from ROS do not arrive with a perfectly constant frequency. A timer should be implemented, like in the VirtualJoystick class (c.f. List. 4.7).
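The following is a hedged sketch of how such a series is typically wired to an AndroidPlot XYPlot. It is not the thesis' PlotClass; the widget ID, colours and window size are assumptions, and the exact formatter constructor may differ between AndroidPlot versions.

```java
import android.graphics.Color;
import com.androidplot.xy.LineAndPointFormatter;
import com.androidplot.xy.SimpleXYSeries;
import com.androidplot.xy.XYPlot;

// Sketch (fields and methods inside an Activity): one dynamic series on an
// XYPlot, updated in the same prepend/trim/redraw fashion as List. 4.10.
private XYPlot plot;
private SimpleXYSeries series;

private void setUpPlot() {
    plot = (XYPlot) findViewById(R.id.speed_plot);                // assumed widget ID
    series = new SimpleXYSeries("motor 1 [rps]");
    plot.addSeries(series, new LineAndPointFormatter(Color.RED, null, null, null));
}

// Called for every new value from the ROS subscriber (on the UI thread).
private void onNewSample(Number timestamp, Number value) {
    series.addFirst(timestamp, value);
    if (series.size() > 100) {                                    // assumed window length
        series.removeLast();
    }
    plot.redraw();
}
```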

4.5.6 Light Control

Figure 4.14: Light Control in the GUI

The Ship Inspection Robot has 3 different LED stripes mounted: one at the camera and two at the front bottom. All LEDs are dimmable between the values [0, 255] through a ROS message. Therefore a light control (see Fig. 4.14) is implemented in the GUI app. The light control is specific to the robot SIR.

The dimming is realised through the Android component SeekBar. With the OnSeekBarChangeListener an event is called when the value of the SeekBar changes, so it is possible to send a custom event with the new value to the Java ROS node. See List. 4.11.

    public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
        // read values from bars
        _valueCam = _sliderCam.getProgress();
        _valueLeft = _sliderLeft.getProgress();
        _valueRight = _sliderRight.getProgress();

        // send event
        fireLightValues(_valueCam, _valueLeft, _valueRight);
    }

Listing 4.11: Listener to the SeekBar for the Light Control

4.5.7 Fillbar for Infrared Sensors etc.

Figure 4.15: Fillbars for Distance Indication

SIR can measure its distance to an obstacle in front of the robot with infrared sensors. For the display of these sensor values a fillbar (see Fig. 4.15) was created. This fillbar can be used not only for the distance sensors but also for the battery status, the radio receiving power and many other applications. For the fillbar an own class was created:

    Fillbar(RelativeLayout layout)

The class needs as input value just the layout to which it gets attached. Android offers the component ProgressBar, which fulfils a similar task, but this ProgressBar is hard to customize; therefore the own Fillbar was developed. The fillbar class has the following functions:

- setRange(float min, float max)
- setValue(float value)
- getValue()
- setColor(int color)
- setBackground(int color)
- setLabelTop(String label)
- setLabelBottom(String label)
- enableLabel(boolean bol)
- setSize(float width, float height)

The fillbar is highly customizable, hence it can be used for many different values. The source code of the class is in App. List. A.6.
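As an illustration, the following fragment sketches how the Fillbar could be used for the battery display, using only the functions listed above. The parent layout ID and the voltage range are assumptions and not values from the thesis.

```java
// Sketch (fragment inside an Activity): a Fillbar instance for the battery voltage.
Fillbar batteryBar = new Fillbar((RelativeLayout) findViewById(R.id.status_panel));  // assumed ID
batteryBar.setRange(20.0f, 25.0f);   // assumed voltage window in V
batteryBar.setLabelTop("Battery");
batteryBar.setColor(Color.GREEN);

// Called from the ROS subscriber event, forwarded to the UI thread (cf. List. 4.9):
batteryBar.setValue(latestVoltage);
```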


Chapter 5  Evaluation of the Application

For successful teleoperation an effective human robot interaction is the key element. The operator needs a good situation awareness (c.f. Sec. 3.2.1), and the information about the robot is gathered only through the Graphical User Interface. To evaluate how effective the operation of the robot with the tablet is, a test route was defined on which users had to navigate the robot. For comparison, the trial was performed with the basic control device, the gamepad, and with the tablet. The appearance and clarity of the Graphical User Interface on the tablet were evaluated with a questionnaire about the users' impression.

5.1 Evaluation of the GUI

The Graphical User Interface was evaluated qualitatively with user feedback from test operators.

5.1.1 Implementation of the GUI Evaluation

The evaluation of the GUI was implemented with an anonymous questionnaire (see App. A.5). After using the application on the tablet with the robot, the test operators had to score the following points between [1, 5]:

Virtual Joystick:
- Robot performance
- Robot response (time)
- Camera performance
- Camera response (time)
- Design of the virtual joystick

3D Model:
- Impression of the own position estimation
- Design

GUI:

- Clarity of elements
- Overall design
- My situation awareness

A score of 1 means poor, 5 stands for perfect.

5.1.2 Evaluation of the Results about the GUI

Figure 5.1: Mean Values of the GUI Evaluation

The averages of the results from the GUI evaluation are shown in Fig. 5.1. All individual results from the evaluation can be found in App. A.4. The results are analysed in the following:

Virtual Joystick: The robot performance with the virtual joystick input is acceptable with an average score of 3.71; it was discussed and evaluated in more detail with the test route (c.f. Sec. 5.2). The robot response time is good and is limited by the not very stable radio connection (c.f. [10]). The result for the camera shows that the camera is not really controllable: the input of the virtual joystick is too aggressive. The camera is actuated with servo motors, so it is a position control and not a speed control. Due to this fact it could be more suitable to include a control element other than the virtual joystick for the camera; sliders or buttons could be better. The design is improvable: it suits a prototype, but a final product would need a redesign.

3D Model: A 3D model of the robot is implemented for the display of the robot against the gravity vector (c.f. Sec. 4.5.3). With an average score of 4.43 the users have a really good impression of their estimation of the robot's position in space. This good result carries over to the opinion about the design of the model. The result shows that the approach with a 3D display is suitable, at least for the impression of the user. For more precise navigation, additional 2D displays are possible (c.f. Sec. 4.5.3), but first the calculation of the robot's position based on its sensor data has to become more accurate than it is now. These calculations were based on another thesis (c.f. [2]).

The Graphical User Interface: The result for the clarity of the elements, with an average score of 4.57, shows that the user is aware of what is displayed and what he can do with it. Based on the feedback that the virtual joysticks for the camera and the robot speed could not be distinguished, a label was added. The overall impression of the graphical design of the user interface is good and the operator can work with the application. But the graphical design is not perfect and can be further developed for a market-ready product and a good identification with the robot.

Situation Awareness: The impression of the situation awareness presented through the GUI is acceptable. This emphasises that the user is clear about which values are displayed and what they mean. Most of the sensor values are not filtered and can be improved. In the case of the Ship Inspection Robot, a next step for increased situation awareness should be an evaluation with the end user, the ship inspector.

5.2 Navigation with the Robot

The Graphical User Interface on the tablet is used for SIR (c.f. Chap. 2). In its present state, SIR is not autonomous, hence the robot has to be steered with an input device by an operator. Normally SIR is steered with a gamepad and displays its information in a GUI on a computer. With the Android app on the tablet, developed within this thesis, there is the new possibility to operate the robot with the tablet only: the app serves as input device and display. The navigation of the robot with the gamepad and with the tablet is compared on a test route (see Fig. 5.2).

5.2.1 Implementation of the Navigation Comparison

The test person has to drive from the start point to the end within boundaries as fast as possible. The robot starts forward, has to turn 180 degrees and has to pass a concave edge backwards (see Fig. 5.4). The route is driven 3 times with each device (gamepad, tablet). The following list describes the task in relation to Fig. 5.3:

1. start forward and drive to the middle
2. turn on the spot
3. continue backwards
4. pass the concave edge until the wheels touch the finish line

The Graphical User Interface: The average score of 4.57 for the Clarity of the elements shows that the user is aware of what is displayed and what he can do with it. Based on the feedback that the virtual joysticks for the camera and the robot speed could not be distinguished, a label was added. The overall impression of the graphical design of the user interface is good and the operator can work with the application, but the graphical design is not perfect and can be developed further towards a market-ready product and a better identification with the robot.

Situation Awareness: The situation awareness conveyed through the GUI is rated as acceptable (c.f. Fig. 5.1). This emphasises that the user understands which values are displayed and what they mean. Most of the sensor values are not filtered and can be improved. In the case of the ship inspection robot, a next step towards increased situation awareness should be an evaluation with the end-user, the ship inspector.

5.2 Navigation with the Robot

The Graphical User Interface on the tablet is used for SIR (c.f. Chap. 2). In its present state SIR is not autonomous, hence the robot has to be steered by an operator with an input device. Normally SIR is steered with a gamepad and displays its information in a GUI on a computer. With the Android app on the tablet developed within this thesis, there is the new possibility to operate the robot with the tablet alone: the app serves as both input device and display. The navigation of the robot with the gamepad and with the tablet is compared on a test route (see Fig. 5.2).

5.2.1 Implementation of the Navigation Comparison

The test person has to drive from the start point to the end within the boundaries as fast as possible. The robot starts forwards, has to turn by 180 degrees and has to pass a concave edge backwards (see Fig. 5.4). The route is driven three times with each device (gamepad and tablet). The following list describes the task in relation to Fig. 5.3:

1. start forward and drive to the middle
2. turn on the spot
3. continue backwards
4. pass the concave edge until the wheels touch the finish line

Figure 5.2: Test Route for the Comparison of Tablet/Gamepad

5.2.2 Evaluation of the Navigation Comparison

The results of the timed test drives with the gamepad and the tablet, described in Sec. 5.2.1, are evaluated and discussed in this section.

Figure 5.5: Test Results with the Gamepad (driving time in seconds per test person, three trials each, with the mean time indicated)

Figure 5.3: Test Route with Numeration

Figure 5.4: Concave Edge within the Test Route

Figure 5.6: Test Results with the Tablet (driving time in seconds per test person, three trials each, with the mean time indicated)
