MSc in Autonomous Robotics Engineering
University of York
Practical Robotics Module 2015

A Mobile Robot Navigation System: Labs 1a, 1b, 2a, 2b

Associated lectures: Lecture 1 and Lecture 2, given by Nick Pears
Lab supervision: Alan Millard and Nick Pears
Lab technician: James Hilder
Module website: https://sites.google.com/a/york.ac.uk/prar/

Overview

This document details work to be conducted over four two-hour lab sessions. The programme of work is as follows:

In lab 1a, you calibrate and characterise infra-red (IR) distance sensors - code needs to be developed to process raw analogue voltage readings to give a local position estimate relative to the lab wall.

In lab 1b, you calibrate and characterise odometry sensors - again some code development is required, this time to give a global position estimate relative to some initial robot position.

In lab 2a, you develop and evaluate a wall-tracking robot system, using the steering control system described in the lecture notes.

In lab 2b, you develop and test a more general robot navigation system, by integrating the components developed in labs 1a, 1b and 2a.

Labs 1a and 1b are associated with lecture 1, and labs 2a and 2b are associated with lecture 2. Any additional information required will be available on the
module website (URL at the top of this page).

To do the required technical work, you will be provided with the items listed below. Please check that they are present and ask for assistance if anything is missing.

1. A Pololu m3pi robot (see Fig. 1) with IR distance measurement sensors and optical wheel encoders, with a USB to host (PC) connector.
2. Software (downloadable from the module web page) that allows you to read an analogue voltage off each of four IR sensors and read a pulse count off each wheel encoder.
3. At least two blocks of wood (these are used to help calibrate the IR sensors in lab 1 and form a robot course in lab 2b).
4. A tape measure and ruler.
5. Some A4 sheets with evenly spaced ruled lines on them.
6. A roll of black electrical insulation tape and some scissors.

To give some context on the first two labs, note that, when developing a practical robot system, we need to:

1. Calibrate sensors: find parameters in sensor models that allow the robot to convert raw sensor readings (analogue voltages or digital pulses) into useful measurements (e.g. metric distance measurements).
2. Characterise sensor performance. What is their useful operating range? Do they work in all environments? Are there residual systematic errors after calibration? How large are the random errors, and how do these vary under various conditions (this concerns the repeatability of sensor measurements)? Does sensor performance drift with time, distance travelled, or other factors, such as operating temperature?

Answers to the above questions inform the design of the robot's navigation system.
Lab 1a: Calibrating and characterising IR range sensors

In the first lab, you are required to familiarise yourself with the infra-red (IR) distance sensors, which have a claimed operating range of 10cm-80cm. The data sheet for these sensors is online and their operation is discussed in the lecture notes.

You are provided with a software function, available on the module website, that samples the analogue voltage off each of four IR sensors at f_s Hz. For each sensor, the software displays the mean and standard deviation of a captured sample of n_ir analogue voltage readings to a terminal across a host USB connection.

Figure 1 illustrates the robot and four sensors placed in the designated IR sensor calibration area, where the sensors can see some targets, consisting of the lab wall and a moveable wooden board. You are provided with some printed A4 sheets which may help you to position the robot relative to the targets.

1. You should gather calibration data for the IR sensors, relative to the robot's midline, as illustrated. Consider how many calibration points you wish to take and think about a suitable choice of spacing between the calibration points. Also note that there is an offset between the sensor body and the robot's centre line - you need to consider how to deal with this.
2. Develop software function(s) that implement a method of IR sensor calibration, enabling distance measurements to the robot's centre line to be computed (a sketch of one possible approach is given at the end of this lab section).
3. Develop software function(s) that allow both the distance and orientation of the robot relative to a wall to be computed. Remember that positive angle changes should be in the anticlockwise sense, looking down on the robot.
4. Test the useful operating limits of the IR sensing system, in terms of distance and orientation. You should build such limits into your software functions, so that they do not return IR sensor results that are highly unreliable.

Time permitting, you may further explore the performance of the IR sensor system.
[Figure 1: Calibrating the IR sensors. The robot is placed at measured distances (X cm) from the lab wall and from a moveable wooden block; the figure marks the robot origin and the IR sensors.]

1. Plot the standard deviation of IR distance measurement error against true distance and comment on the shape of the graph. (Hint: a further log-log plot may help you reveal the nature of the relationship of the variables.)
2. Repeat the above experiment, but use a different target colour, such as obtained from a black sheet of A4 paper (provided).
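By way of illustration only, the C++ sketch below shows one way that items 2 and 3 of the task list above might be structured. The power-law calibration model, the sensor geometry constants (SENSOR_OFFSET_M, SENSOR_BASELINE_M), the function names and the sign conventions are all assumptions made for this sketch, not part of the provided software; your own calibration data and sensor arrangement should determine the model, its parameters and the signs.

#include <cmath>

// --- Assumed calibration model (illustrative only) ---
// Sharp-style IR sensors are often well modelled over their working range by
// a power law  d = a * v^b  (with b negative), where a and b are fitted by
// linear regression on (log v, log d) pairs from your calibration data.
struct IrCalibration {
    double a;   // scale parameter from the fit
    double b;   // exponent from the fit (negative)
};

// Hypothetical geometry constants - measure these on your own robot.
const double SENSOR_OFFSET_M   = 0.04;  // sensor face to robot centre line (m)
const double SENSOR_BASELINE_M = 0.08;  // spacing between two same-side sensors (m)

const double MIN_RANGE_M = 0.10;        // claimed useful range of the sensor
const double MAX_RANGE_M = 0.80;

// Convert a raw analogue voltage into a distance from the robot centre line.
// Returns a negative value if the reading is outside the trusted range.
double irVoltageToDistance(double volts, const IrCalibration& cal) {
    double d = cal.a * std::pow(volts, cal.b) + SENSOR_OFFSET_M;
    if (d < MIN_RANGE_M || d > MAX_RANGE_M) {
        return -1.0;   // flag an unreliable reading
    }
    return d;
}

// Estimate distance and orientation relative to a wall from two sensors on the
// same side of the robot, separated by SENSOR_BASELINE_M along its length.
// Positive theta is anticlockwise, looking down on the robot; adjust the sign
// to match which side of the robot the wall is on.
bool wallPose(double dFront, double dRear, double& dist, double& thetaRad) {
    if (dFront < 0.0 || dRear < 0.0) return false;          // reject bad readings
    thetaRad = std::atan2(dRear - dFront, SENSOR_BASELINE_M);
    dist     = 0.5 * (dFront + dRear) * std::cos(thetaRad); // perpendicular distance
    return true;
}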
Lab 1b: Calibrating and characterising the odometry system (optical shaft encoders)

The robot is equipped with a pair of optical wheel encoders that give n pulses per revolution of a robot wheel, taking into account the gear ratio between a drive wheel axle and an encoder shaft (n and other key data are available on the module website). You are provided with a software function (on the module website) that returns the cumulative count from the optical encoders connected to the left and right drive wheels.

1. The m3pi robot specifications give the drive wheel diameter as 32mm. Check this specification with a ruler.
2. Estimate how many encoder pulses equate to a travelling distance of 1 metre - call this count c_1.
3. Adapt a robot ground line following program (e.g. a PID or PD controller), such that the robot follows a straight black line until c_1 pulses have been counted and then stops. Using a moderate maximum robot speed and ramping the speed up and down will help to give the robot a smoother motion and hence make this odometry system calibration more accurate.
4. Measure the straight-line distance travelled by the robot and the count values from the two wheels. Using an average of the two counts then allows the robot wheel radius estimate to be refined (a sketch of the arithmetic is given below).
5. Compare the wheel radius estimate with that specified and/or measured in (1) above and comment on any discrepancies. Which method do you think is most accurate and why?

The above procedure provides one robot odometry parameter (the drive wheel radius); the other parameter required is the wheelbase (the distance between the drive wheels).

1. Directly measure the robot's wheelbase with your ruler and make a note of it.
2. Devise an alternative method of deriving the robot's wheelbase (ask for help if needed) and compare with (1) above.
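As a sketch of the arithmetic behind item 4 above (the function name and signature are illustrative; n and the measured quantities come from your own run):

#include <cmath>

// Refine the wheel radius estimate from a straight-line calibration run.
//   n               : encoder pulses per wheel revolution (module website)
//   countLeft/Right : cumulative counts recorded over the run
//   distanceM       : straight-line distance actually travelled, by tape (m)
//
// Each wheel revolution advances the robot by 2*pi*r, so over the run
//   distance = (meanCount / n) * 2 * pi * r
// and hence
//   r = distance * n / (2 * pi * meanCount).
double refineWheelRadius(double n, long countLeft, long countRight, double distanceM) {
    double meanCount = 0.5 * (countLeft + countRight);
    return distanceM * n / (2.0 * M_PI * meanCount);
}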
We now have all the parameters required to implement a global position estimation system using odometry.

1. Develop software function(s) that are able to update an estimate of the robot's global position using odometry. The required details are provided in the lecture notes (a minimal sketch of such an update is given at the end of this lab section).
2. Run the robot on both linear and circular tracks (using the ground line following function) and comment on the observable global position errors (by tape measurement) associated with the robot's final position.

Time permitting, you may attempt the following:

1. Use the robot's wheel encoders to determine the relationship between the robot's demand wheel speed settings and the forward linear speed of the robot in metric units. (In other words, what do 50%, 75%, 100% etc. relate to in actual metric speed of the robot, and is the relationship linear?) Discuss the results with reference to open loop / closed loop speed control systems.
2. Given a required straight-line path, implement code that determines when to start ramping down the robot speed and that ramps the speed down (near) linearly so that the robot stops close to the desired end position.
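The exact update equations are given in the lecture notes; the following is only a minimal sketch of the kind of function item 1 asks for, using the standard differential-drive dead-reckoning step. The structure, names and parameter values are illustrative assumptions to be replaced by your own calibrated values.

#include <cmath>

// Odometry parameters obtained in this lab (values here are placeholders).
const double WHEEL_RADIUS_M = 0.016;   // refined wheel radius (m)
const double WHEELBASE_M    = 0.089;   // distance between drive wheels (m)
const double PULSES_PER_REV = 1000.0;  // n, from the module website

struct Pose {
    double x;      // global x position (m)
    double y;      // global y position (m)
    double theta;  // heading (rad), anticlockwise positive
};

// Update the global pose estimate from the encoder counts accumulated since
// the previous call, integrating along the mean heading over the interval.
void updateOdometry(Pose& pose, long deltaCountLeft, long deltaCountRight) {
    double distPerPulse = 2.0 * M_PI * WHEEL_RADIUS_M / PULSES_PER_REV;
    double dLeft  = deltaCountLeft  * distPerPulse;
    double dRight = deltaCountRight * distPerPulse;

    double dCentre = 0.5 * (dLeft + dRight);          // distance moved by robot centre
    double dTheta  = (dRight - dLeft) / WHEELBASE_M;  // change in heading

    double midTheta = pose.theta + 0.5 * dTheta;      // heading at interval midpoint
    pose.x     += dCentre * std::cos(midTheta);
    pose.y     += dCentre * std::sin(midTheta);
    pose.theta += dTheta;
}

Called at a regular interval with the change in each encoder count since the previous call, this accumulates the global pose estimate needed for item 2.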
Lab 2a: A steering control system for a robot wall following system

In this lab, the aim is to implement a steering control system such that the robot tracks a path that is parallel to a wall. The control system should operate as follows (full details are provided in the lecture notes):

1. Measure the distance and angle relative to the wall using the IR sensors.
2. Generate a distance error by subtracting a predefined demand distance, which corresponds to the distance of the robot's path from the wall. A good choice of demand distance corresponds to a distance at which the IR sensor works well; 30cm-40cm would be a good choice.
3. From the distance error, generate a demand heading.
4. From the demand heading, generate a demand turning curvature.
5. For the current robot speed (using the optical wheel encoders), determine demand speeds for the left and right drive wheels.
6. Apply the demand wheel speeds to the robot's drive wheels.
7. Repeat the above on a regular clocked interval; 100ms (10Hz) should be fine for moderate robot speeds, but faster speeds may require 40ms (25Hz) update rates.

A minimal sketch of one such control loop is given at the end of this section. Once you have a basic system working, try the following (items 3 and 4 can be viewed as time-permitting extras).

1. Try starting the robot from varying positions and orientations relative to the wall. The robot should move onto its correct path smoothly.
2. Place a small block of wood against the wall and check that the robot adjusts its position accordingly.
3. Vary the control system parameters k_d and k_θ and comment on the performance of the system.
4. Make the control system parameters dependent on the robot's speed, but limit their maximum values. This should allow the robot to turn more sharply onto its path when travelling at lower velocities.
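Purely as an illustration, the sketch below follows the structure of steps 1-7 above (distance error, then demand heading, then demand curvature, then wheel speeds). The gain names k_d and k_θ come from this handout; everything else - the function names, the saturation limit, the exact form of each mapping and the sign conventions - is an assumption to be replaced by the control law given in the lecture notes.

#include <algorithm>
#include <cmath>

// Control gains from the handout (values are placeholders to be tuned).
const double K_D     = 2.0;    // distance error -> demand heading gain (rad/m)
const double K_THETA = 4.0;    // heading error  -> demand curvature gain (1/(m rad))

const double DEMAND_DIST_M   = 0.35;   // demand distance from the wall (m)
const double MAX_HEADING_RAD = 0.6;    // limit on the demand heading (rad)
const double WHEELBASE_M     = 0.089;  // distance between drive wheels (m)

// One iteration of the wall-following steering loop, called every 100ms.
// wallDist and wallAngle come from the IR functions of lab 1a; speedMps is
// the current forward speed estimated from the wheel encoders. Outputs are
// the demand speeds for the left and right drive wheels (m/s). The signs
// here assume the wall is on the robot's left; flip them as appropriate.
void steeringStep(double wallDist, double wallAngle, double speedMps,
                  double& leftDemandMps, double& rightDemandMps) {
    // Step 2: distance error relative to the demand distance.
    double distError = wallDist - DEMAND_DIST_M;

    // Step 3: demand heading, limited so the robot approaches its path gently.
    double demandHeading = std::clamp(K_D * distError, -MAX_HEADING_RAD, MAX_HEADING_RAD);

    // Step 4: demand turning curvature from the heading error.
    double curvature = K_THETA * (demandHeading - wallAngle);

    // Steps 5-6: convert forward speed and curvature into wheel speed demands,
    // using v_r - v_l = curvature * speed * wheelbase for a differential drive.
    double dv = 0.5 * curvature * WHEELBASE_M * speedMps;
    leftDemandMps  = speedMps - dv;
    rightDemandMps = speedMps + dv;
}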
Lab 2b: An integrated robot navigation system

In this lab, you may need to finish off outstanding work from labs 1a, 1b and 2a above. If time allows, you should try to bring together the various sensor and control systems developed in previous labs into a single working, independent robot navigation system.

1. Run the global odometry system in parallel with the wall following system and follow a wall for a fixed distance parallel to that wall (e.g. 2-3 metres). Ideally the robot should ramp down its speed so that it stops in the desired position.
2. Develop a robot system that continually estimates its global position using odometry and periodically corrects this position when it receives valid IR readings from a wall or wooden target (a sketch of one simple correction scheme is given at the end of this section).
3. Evaluate your system (over appropriate metrics) using multiple runs on a predefined robot path that consists of straight lines and 90 degree turns - an example robot course is given below. Note that the wooden boards in the course shown in Fig. 2 allow corrections of all global position variables, x, y and θ.

For further discussion:

1. How would you adapt your system so that a path could consist of a concatenation of curved and straight sections, and how would the robot controller be adapted such that it could follow such paths? The academic paper references given in the lecture notes may help you with this.
2. Given that you can estimate the IR distance measurement uncertainty, how could you use this to determine the uncertainty in robot pose estimation, in both local and global frames?
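The correction strategy for item 2 is left to you; purely as one illustration, the sketch below simply overwrites the lateral coordinate and heading of the odometry estimate whenever a valid IR reading of a known, axis-aligned wall is available. All names, the sign conventions and the assumption of a wall parallel to the global x-axis are illustrative; a more principled scheme would weight the odometry and IR estimates by their uncertainties, as raised in the discussion questions.

#include <cmath>

struct Pose {
    double x, y, theta;   // global pose; theta anticlockwise positive (rad)
};

// Correct the odometry pose using an IR measurement of a wall known to lie
// parallel to the global x-axis at y = wallY. wallDist and wallAngle come
// from the lab 1a functions; a negative wallDist flags an invalid reading.
// This simple reset trusts the IR reading completely when it is valid.
bool correctAgainstWall(Pose& pose, double wallDist, double wallAngle, double wallY) {
    if (wallDist < 0.0) {
        return false;                 // no valid IR reading - keep the odometry estimate
    }
    pose.y     = wallY - wallDist;    // lateral position fixed by the wall distance
    pose.theta = -wallAngle;          // heading fixed by the measured wall angle
    return true;                      // x is left to odometry; a wall end or a wooden
                                      // block (as in Fig. 2) is needed to correct it
}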
[Figure 2: An example robot course. Labels in the figure: start, finish, lab wall, wooden block.]