Interface design of a virtual laboratory for mobile robot programming

U. Borgolte *
FernUniversität in Hagen, M+I/PRT, Universitätsstr. 27, 58084 Hagen, Germany
* Corresponding author: e-mail: ulrich.borgolte@fernuni-hagen.de, Phone: +49 (0)2331 9871106

This paper presents architectural considerations for a remotely operable laboratory with a mobile robot. The laboratory is used in student education within the MSc programme Mechatronics at FernUniversität in Hagen, where it forms the practical part of the module Mechatronics and Robotics. The main objective of the laboratory is to teach behaviour-based programming and, at the same time, the differences between simulation and reality.

Keywords: virtual laboratory; mobile robots; robot programming; simulation; practical experience

1. Introduction

FernUniversität in Hagen is the only distance-teaching university in the German-speaking countries. As distance education is the central principle of the university, minimizing the physical attendance of students is crucial. On the other hand, this causes some problems in teaching. Hands-on experience is essential in engineering education, especially for students who start their studies with no prior practice. At conventional universities, this experience is gained in laboratory experiments. Even in distance education, students therefore very often have to travel to the university to access laboratory equipment.

Remote operation of a laboratory requires a certain bandwidth to allow real-time data transfer and video streaming for monitoring of the lab. With the increased availability of broadband network connections, several remotely operable laboratories have been developed [1]. These labs, however, were restricted to set-ups where no motion, or only restricted motion, was possible. This paper presents ongoing work on a laboratory in which a mobile robot can be programmed to execute any possible movement in its environment without restriction. Automatic measures therefore have to be taken to ensure unsupervised operation of the lab.

2. Layout of the robot laboratory

The students have to develop programs for the mobile robot Pioneer 3 AT by ActivMedia Robotics, now MobileRobots Inc. [2]. The sensor system of the robot consists of bumpers, ultrasonic sensors, a SICK laser scanner, and a video camera (fig. 1). Within the lab, only the laser scanner is used. The ultrasonic sensors are not used because signal noise is very high in the narrow environment, and the video camera is not used at this stage because the processing of video images is far beyond the scope of the experiments.

Fig. 1 Mobile robot Pioneer 3 AT.
Fig. 2 Robot maze.
The students' work in the lab should concentrate on the main topics and should not be hindered by complex disturbing influences. Therefore, instead of a natural environment, an artificial and fully controlled environment is used: the experimental plant is set up as a maze. The task of the robot is to drive through the hallways while avoiding collisions with the walls. The starting position is characterized by a specific width of the hallway. Once the starting position is reached again, the robot program shall identify it, rotate the robot into its original orientation, and stop all further movements. The maze's structure is extremely simple, with flat and perpendicular walls and well-known angles and dimensions (fig. 2) [3]. With this simplification, the students are able to concentrate on the basic principles of behaviour-based programming instead of struggling with strange effects of sensor readings and complicated geometric computations.

3. Laboratory for conventional operation

As of today, the laboratory is still used in a conventional way, as a presence laboratory. Students have to travel to the university; there they can program the robot, execute the programs, and intervene directly if anything goes wrong. There is direct access to the maze, and a supervisor is present to assist them.

Programming is done in COLBERT, a dedicated programming language for behaviour-based programming of robots. Its structure is very close to C, but much simpler and easier to understand. COLBERT offers quasi-parallel execution of activities. The programming environment is Saphira, once provided by the robot's manufacturer, MobileRobots. Though Saphira is no longer supported or distributed by MobileRobots, it is still used in the lab, as it works quite well for education purposes. A sample screenshot of the Saphira window is shown in fig. 3.

Executing new programs directly on the robot is somewhat dangerous.
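No COLBERT code is reproduced in this paper. Purely as an illustrative analogue (in Python, with hypothetical behaviour names; this is not the lab's actual software), the quasi-parallel execution of activities mentioned above can be mimicked by a round-robin scheduler over generators, each of which yields control after every small step:

```python
# Illustrative analogue of quasi-parallel activities (not COLBERT itself):
# each activity is a generator that yields after every step, and a simple
# round-robin scheduler interleaves the activities cooperatively.

def drive_forward(log):
    for step in range(3):
        log.append(f"drive {step}")
        yield  # hand control back to the scheduler

def watch_walls(log):
    for step in range(3):
        log.append(f"watch {step}")
        yield

def run_activities(activities):
    """Advance each activity by one step per cycle until all are finished."""
    while activities:
        for act in list(activities):
            try:
                next(act)
            except StopIteration:
                activities.remove(act)

log = []
run_activities([drive_forward(log), watch_walls(log)])
# log now contains the steps of the two activities interleaved
```

In COLBERT, this interleaving is provided by the language runtime itself; activities are written as plain sequential code.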
Saphira offers the possibility to test programs before loading them onto the real robot. Testing is done by establishing a connection between Saphira and SRIsim, a specialized simulation environment for devices from MobileRobots. The connection is via TCP/IP, with the same mechanisms as for the real robot. In SRIsim, environments can be defined by simple geometric commands. SRIsim reports back sensor signals corresponding to the situation of the simulated robot in this environment. Both ultrasonic sensors and laser scanners are supported. To minimize the gap between simulation and reality, the sensor signals are superimposed with artificial noise. In Saphira, the ultrasonic sensor readings are animated by dots, the laser readings by green lines (fig. 3). In addition to the Saphira and SRIsim windows, a further window showing the current status of the activities can be displayed. This information can be very helpful for testing and debugging programs. The whole programming environment is shown in fig. 4.

Fig. 3 Saphira window.
Fig. 4 Programming and simulation environment.

The communication structure is identical for simulation and for the application of the real robot. In general, Saphira communicates with the robot via a wireless link and standard Ethernet protocol (fig. 5). The Pioneer is equipped with a PC board and a micro-controller board. The PC board runs Linux; it offers a user interface for direct interaction with the robot. For applications where no data link is available, Saphira can run on the robot PC
directly. Thus a fully autonomous operation is possible. The movement commands are executed on the micro-controller board, where the feedback control is located, too. The communication between the PC board and the micro-controller board is serial, via standard RS-232. For remote operation of the robot, the robot PC is barely more than an interface between the network link and the micro-controller (fig. 5). For Saphira, the robot PC is invisible: there are no commands to the PC, and there is no feedback generated by the PC.

Fig. 5 General communication structure.

4. Laboratory for remote operation

If the laboratory is remotely operated, the most obvious change is in the location of the components of the programming environment. The PC hosting Saphira will be at the student's home, while the other parts remain in the real laboratory at the university. The student is able to simulate the programs in the same way as within the university, that is, locally on his/her PC where Saphira is located. As soon as the student connects to the real robot, communication between Saphira and the robot PC is mediated by the Internet. This change does not influence any of the components directly.

Every time a student wants to start the lab, it needs to be in its initial state. This has to be ensured 24 hours a day, 7 days a week, independently of the previous actions on the lab. Thus, after a student finishes his/her experiment, the next student shall be able to start the lab in exactly the same way. If the experiment goes well and the task of the robot is executed as desired, this condition holds. But during program execution, something may go wrong and prevent the robot from returning to its start position. This causes a problem if no one is present in the laboratory to intervene and rectify the situation. The main reason for such failures is a kind of deadlock.
The robot is equipped with bumpers on both the front and the rear side (fig. 1). When activated, these bumpers disable movements in the respective direction. This can lead to situations where the robot is no longer able to drive forwards or backwards, or even to turn on the spot. Such problems can occur in the corners of the maze if the robot touches the walls with both the front and the rear bumpers. Thus, measures need to be implemented to avoid such situations. In particular, the following sub-problems have to be tackled:

- Detection of potential collision situations
- Avoidance of collisions
- Safe return to the docking station

As mentioned before, movement commands from Saphira are directed to the micro-controller via the PC board on the robot. In the same way, sensor signals are routed back from the sensors to Saphira. The routing in both directions is done by a small program called ipthru (fig. 6). It simply acts as a logical interface between the physical Ethernet and serial interfaces. But at the same time, it is the program where all information on the status of the robot, on the commanded movements, and on the sensor readings is available. Therefore, it is the ideal location for implementing all measures necessary for unsupervised operation. To make life easy, ipthru is split into two parts, and the collision check and collision avoidance are put in between (fig. 7). ipthru/1 receives the original motion commands from Saphira and sends the sensor readings back to Saphira; ipthru/2 sends the modified motion commands to the micro-controller and receives the sensor data from the sensors.
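The split can be sketched as a simple filter between the two halves (a Python sketch with a hypothetical command representation; the real ipthru operates on the serial protocol level, and the check stub below merely stands in for the laser-based check):

```python
# Sketch of the split ipthru: a filter sits between the half that talks to
# Saphira (ipthru/1) and the half that talks to the micro-controller
# (ipthru/2). Command representation, names, and threshold are hypothetical.

STOP = ("speed", 0)

def collision_free(command) -> bool:
    """Stub for the collision check; the real check evaluates laser data."""
    verb, value = command
    return not (verb == "move" and value > 1000)  # assumed danger threshold

def filter_command(command):
    """Pass safe commands through unchanged, replace unsafe ones by a stop."""
    return command if collision_free(command) else STOP

def ipthru(commands):
    """ipthru/1 receives commands; ipthru/2 gets the filtered versions."""
    return [filter_command(cmd) for cmd in commands]

# A short move passes unchanged; an over-long move is replaced by a stop:
print(ipthru([("move", 500), ("move", 2000)]))
# → [('move', 500), ('speed', 0)]
```

The point of the split is that the filter sees every command and every sensor reading, so both collision avoidance and the user signalling described below can live in one place.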
Fig. 6 Routing of movement commands and sensor data.

4.1 Decomposition of movement commands

If movement commands are limited (e.g. move 500 [mm]), it is easy to check whether they will lead to collisions with the environment (see 4.2). The situation becomes more complicated if rotational and longitudinal movements are superimposed. In this case, the area that will be traversed needs to be computed before checking the sensor data. The most complicated case is if movements are not limited. This can occur if only a speed is commanded (e.g. speed 200 [mm/sec]): the robot will move at this speed until the speed is altered by another command. Therefore, it is not possible to calculate a boundary of the movement. For this reason, a different approach will be implemented. First, the current direction of movement is calculated. Next, a limited movement command is generated which directs the robot in the same direction, but only for a well-known distance. For this command, the simple collision check can be executed. If there is no danger of collision, the movement is carried out and the next limited movement command is calculated. This procedure stops as soon as Saphira sends an altering command.

Fig. 7 Structure of collision detection and avoidance.

4.2 Detection of danger of collision
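The simple collision check invoked in 4.1 can be sketched as follows for a straight forward move (a Python sketch only; names and frame conventions are hypothetical: laser points are assumed as (x, y) coordinates in millimetres in the robot frame, with x pointing forwards):

```python
# Sketch of the rectangle check for a straight forward move. The swept area
# is the rectangle with the robot's width and the length of the intended
# motion; any laser point inside it indicates a danger of collision.
# Width value and frame conventions are assumptions, not the lab's figures.

ROBOT_WIDTH_MM = 500.0  # assumed robot width including a safety margin

def path_is_clear(laser_points, move_length_mm: float) -> bool:
    """True if no laser point lies in the rectangle swept by the move."""
    half_width = ROBOT_WIDTH_MM / 2.0
    for x, y in laser_points:
        in_front = 0.0 <= x <= move_length_mm
        in_track = abs(y) <= half_width
        if in_front and in_track:
            return False  # obstacle inside the swept rectangle
    return True
```

A point beyond the end of the move or far to the side does not block the command; only points inside the swept corridor do.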
If the length and direction of a movement are known, the checker simply needs to ask the laser scanner whether there are obstacles in the direction of movement at a distance smaller than the length of the movement. For linear movements, the area to be checked is the rectangle in front of the robot with the width of the robot and the length of the intended motion. For circular movements, the checked area is defined by the segment of a circle given by the rotation of the robot and the distance to be travelled. This includes zones never passed by the robot, but for simplification of the computation these can act as safety margins.

The laser sensor scans an angle of 180° in front of the robot; it does not cover parts of the sides and nothing behind the robot. If the robot turns on the spot or drives backwards, it is therefore not possible to cover the whole area of movement with the laser scanner. In this case, the ultrasonic sensors need to be checked. Even though the signals from these sensors are by far not as precise as those from the laser scanner, they can indicate the existence of obstacles.

Though the students do not deal with maps of the environment, it can ease the detection of collision risks if the checker builds a local map from the sensor signals by applying a SLAM algorithm [4]. Within this map, the path of the robot and the areas crossed by the robot allow more precise calculations of where collisions could occur. Thus, the safety margins can be chosen much smaller than with the simple methods mentioned above.

4.3 Collision avoidance

If there are no obstacles within the predicted area of movement, the command can be executed without modification. In this case, the original or decomposed limited movement command is passed on to ipthru/2. If there is a danger of collision, the movement will not be executed (it is not intended to drive around obstacles, but simply to ensure that the bumpers will not be activated).
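Combining this rule with the decomposition of 4.1, an unbounded speed command could be handled roughly as follows (a Python sketch; names, step size, and the stubbed checker are hypothetical, and the fixed loop bound merely stands in for "until Saphira sends an altering command"):

```python
# Sketch: turn an unbounded speed command into bounded steps, each gated by
# the collision check; on danger, a stop is issued instead of the step.
# Step size and command representation are assumptions, not the lab's values.

STEP_MM = 200.0  # assumed length of one bounded step

def bounded_steps(speed_mm_s: float, checker, max_steps: int):
    """Yield bounded move commands until the checker reports danger."""
    direction = 1 if speed_mm_s >= 0 else -1
    for _ in range(max_steps):
        step = ("move", direction * STEP_MM)
        if not checker(step):
            yield ("speed", 0)  # stop instead of risking a bumper contact
            return
        yield step

# Example with a checker that allows the first two steps only:
allowed = iter([True, True, False])
commands = list(bounded_steps(250.0, lambda cmd: next(allowed), max_steps=5))
print(commands)
# → [('move', 200.0), ('move', 200.0), ('speed', 0)]
```

The robot thus keeps moving in the commanded direction as long as each bounded step is judged safe, and is halted before a bumper can be triggered.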
In this case, a "speed 0" command is given to ipthru/2 and the robot is stopped. In general, the user is not aware of the collision avoidance mechanism within the robot and will expect a causal dependency between the original program and the activities of the robot. To avoid confusion about the seemingly strange collision avoidance behaviour, the intervention must be signalled to the user. As the only link back from the robot to Saphira is via the sensor signals, these are used to indicate the modification of the program to the user: the sensor signals are modified to display an activation of the bumpers. This hint informs the user that a collision has been prevented and that the robot program has to be changed.

4.4 Safe return

In the conventionally operated lab, the robot is connected to the charging station manually after completion of its task. In a remotely operated lab, a standard procedure needs to be run every time a user disconnects from the lab. Within this procedure, the robot needs to identify its location and to plan and execute a path to the charging station. The procedure can be activated by the host computer which schedules the access slots of the students. In addition, the battery charge needs to be supervised the whole time the robot is running. The path to the charging station can be found in the maze environment by following the walls until the charging station is reached. But a better and more reliable solution is the use of a map. As the environment is very simple, small, and time-invariant, a predefined map is the best solution. It can also be used instead of SLAM for the detection of collision dangers.

5. Conclusion

In this paper, methods and tools are presented to upgrade an existing laboratory for mobile robot programming to a remotely operated lab. The intention is to allow students to run the lab from home, relieving them from travelling to the university, without losing any of the information they get in the real laboratory.
Remote access is quite simple with the programming environment used, as it is TCP/IP based even for local access. But special attention is necessary to ensure 24/7 availability of the laboratory without human intervention. The work on this conversion is ongoing.

References

[1] A. Bischoff and C. Röhrig, Remote Experimentation in a Collaborative Virtual Environment, 20th World Conference on Open Learning and Distance Education, ICDE-2001, Düsseldorf, Germany (2001).
[2] www.mobilerobots.com
[3] U. Borgolte, Reaktive Roboter-Programmierung, FernUniversität in Hagen, internal paper (in German) (2008).
[4] J. J. Leonard and H. F. Durrant-Whyte, Simultaneous map building and localization for an autonomous mobile robot, Proceedings Int. Conference on Intelligent Robots and Systems, IROS'91, Osaka, Japan (1991).