Preprints of the 8th IFAC World Congress Milano (Ital) August 8 - September, Fu logic control of a robot manipulator in 3D based on visual servoing Maximiliano Bueno-Lópe and Marco A. Arteaga-Pére Departamento de Control Robótica. División de Ingeniería Eléctrica de la Facultad de Ingeniería. Universidad Nacional Autónoma de México. Apdo. Postal 7 56, México, D. F., 45, México. Tel.: + 5 55 56 3 3. Fax: + 5 55 56 6 73. (e-mail: max@fi-b.unam.mx, marteagp@unam.mx). Abstract: Visual servoing is a useful approach for robot control. It is speciall attractive when the control objective can be stated directl in image coordinates. Fu control is a practical alternative for a variet of challenging control applications since it provides a convenient method for constructing nonlinear controllers via the use of heuristic information, which for instance ma come from an operator who has acted as a human-in-the-loop controller for a process. Fu control strateg offers an alternative approach for man conventional sstems, which has certain advantages over the other techniques. In this work, we proposed a control algorithm for a robot manipulator, which combines fu logic with 3D visual servoing. For implementation onl image coordinates are required. Simulation results show the good performance of the complete sstem. Kewords: Fu Control, Robot Manipulators, Visual Servoing.. INTRODUCTION In most robot control sstems, the control algorithms are computed via the use of position and velocit information obtained b sensors located at each robot link (e.g., encoders, tachometers, etc). However, when the robot is operating in an unstructured environment, such sensor information is not alwas satisfactor. In unstructured environments vision based sstems allowing non-contact measurement of the surroundings, similar to human sense of sight, can be utilied to obtain the position information required b the controller. The visual sensor data can be applied for on-line trajector planning and even for the feedforward/feedback control referred as visual servoing, see Sahin and Zergeroglu 6,HutchinsonandChaumette7,Chaumetteand Hutchinson 6. Visual servoing is an approach to control robot motion using visual feedback signals from a vision sstem. The objective consists in making the manipulator end-effector follow a specified trajector to reach a final point in the workspace according to Taebia and Islam 6, Hsu et al. 6 and Kell et al.. This paper presents the control of a three degrees of freedomrobotarmusingfulogicincombinationwitha vision sstem. The 3D visual information is obtained from the composite inputs of two separate cameras placed in the robot work space. The control law is found b measuringthefeaturesofthetargetextractedfromeachacquired image.thepaperisorganiedasfollows:sectionreviews therelationshipbetweenjointandimagecoordinates.the cameras model is given in Section 3, while the control approach is introduced in Section 4. Simulation results are presented in Section 5. Finall, some conclusions are given in Section 6.. PRELIMINARIES Throughout this paper, we will assume that the robot is a three degrees of freedom manipulator. Consider a threedimensional point (x,,) that is projected onto a screen or an image plane. Such projection is composed onl of two image coordinates (, ). To control movement in 3D is necessar to solve the problem of depth as seen in Liu et al. 6 and Hutchinson and Chaumette 7. In this paper we emplo two cameras, one placed in front of the robot (Camera ) and the other above (Camera ), as shown in Figure. The arrangement has been chosen to avoid calibration. This wa, the first camera gets the motion of the end-effector in the plane x, while the second is emploed to capture the motion of the endeffector along the axis. In doing so, the information regarding the axis is not relevant, but in exchange a right-handed coordinate sstem (,, 3 ) parallel to the base sstem (x,,) can be constructed. Note that this is basicalltheonlinformationavailablewhichrelatesboth coordinate sstems and that the coordinate 3 does own a different origin and scale factor. This image coordinate sstemwillbeusedinsection4todesignavisualservoing approach. Copright b the International Federation of Automatic Control (IFAC) 4578
Preprints of the 8th IFAC World Congress Milano (Ital) August 8 - September, Fig.. Reference Sstem 3. CAMERAS MODELLING The first step to control design is to describe the relationship between the end-effector position given b x R x T and the image coordinates (, ) of Camera and (, 3 ) of Camera. 3. Camera model From Figure one has x R x o o c + + o r c. () As can be seen, it is possible to define o r c that represents the position of the end-effector relative to an auxiliar reference c parallel to the coordinate sstem of Camera. From () it is rcx o r c x R o o c. () r c x R is the position vector of the robot end-effector, while o o c can be appreciated in Figure. Our approach consists in setting both cameras in such a wa that a right handed image coordinate sstem can be easil formed. to express x x R (3) rcx c (4) r c oc ō c, (5) o c3 x oc c x R ō c. (6) o c3 Based on the previous equations we obtain the following representation of the coordinates in the x plane expressed in Camera, see Arteaga et al. 9 α λ cos(θ ) sin(θ ) o c λ sin(θ ) cos(θ ) x oc U o c3 V x oc o c3 (7) cos(θ ) sin(θ αλ ) sin(θ ) cos(θ ) U, (8) V 4579
Preprints of the 8th IFAC World Congress Milano (Ital) August 8 - September, where α λ αλ (9) o c λ α is a conversion factor meters- and λ is the focal length. 3. Camera model Reference input Fuification FUZZY CONTROL Inference Mechanism Defuification Torque ROBOT Camera Position Camera The model of Camera is obtained b using the same approach of the Camera. In this case the position vector of the end-effector is given b so that x R x o r c x R o o c + o c rcx r c r c + o r c, () rcx r c. () B defining the following vectors on the x plane x x R () one gets rcx c r c (3) oc ō c, o c (4) x oc c x R ō c. (5) o c In this case the rotation matrix to expresses c with respect to c is cos(θ ) R θ c Rxo o. (6) Then, in Camera one has α λ 3 O c3 λ x oc o c V U α λ cos(θ ) cos(θ ) x oc o c V, U (7) where α λ α λ. (8) o c3 λ B combining (7) and (7) one gets, α λ cos(θ )(x o c )+ α λ sin(θ )( o c3 )+U (9) α λ sin(θ )(x o c )+ α λ cos(θ )( o c3 )+V () 3 α λ ( o c )+U. () B defining cosθ c θ and sinθ s θ, it is given the model Rule-Base End Effector Position Fig.. Fu Control Signal Processing αλ c θ αλ s θ αλ s θ αλ c θ. 3 αλ x oc U. o c o c3 + V U. 4. FUZZY VISUAL CONTROL () In this section, we will use fu control technique to design a fu controller to move an industrial robot to a desired position. We design three fu controllers for each angle, the construction of the fu controller consists in defining the linguistic variable and the fu terms associated with the numerical input and output data (see Sun and Er 4, Bernal et al. 9, Tanaka and Wang ). Through the elements that constitute the knowledge base an inference engine has been defined b the max-product inference rule Michels et al. 6. In fu control sstem the linguistic variables are chosen depending on the sensor input and the action command to send to the sstem (Guerra et al. 9). The inputs are given b the vision sstem and correspond to the image features, and the outputs are the movements of the joint angles of the manipulator. To bound the number of rules (without loss of performance), a significant number of fu sets for each variable has been found. The form of the membership function is chosen triangular. In the inference, we adopt the method of Mamdani (also called the min-max method) to do fu logical reasoning. To determine the crisp output value, we use Center of Area Method (COA). A block diagram of the applied visual servoing configuration is given in Figure. The functions of the main blocks in Figure can be summaried as follows. The actual robot trajectories, X R ; from the robot are observed b Cameras and to form the 3-D composite camera output. This camera output is compared with the desired trajector, the difference of which is fed to the fu controller as the error signal. The output from the fu controller, are applied to the robot controller. 458
Preprints of the 8th IFAC World Congress Milano (Ital) August 8 - September, Final Point (x f, f, f ) Starting Point ( xi, i, i ) Joint Joint q 3 q Joint q Fig. 4. Fu Sets for position error, axis x q Fig. 3. Description of Robot In the Figure 3 shows a general diagram of fu controller. The camera information gives the position in the planes (x ) and (x ), we designed three fu controllers, one for each angle (θ,θ,θ 3 ). To generate the action control, each fu controller indicates the torque value required to move each angle. 4. Choosing Fu Controller Inputs and Outputs The output variable of the robotic sstem is the position R 3 of the image feature, i. e. the position of the target in the computer screen. To proceed with the design of the controller we define the tracking error d. (3) The task to be accomplished b the robot is to go from its initial position () to a final position f obtained and given through the camera image on the computer screen, respectivel. To create a soft trajector between these two points, we emplo v() : R 3 R 3 given b ẏ d v() k ỹ +ǫỹ k ( d f ), (4) where k, k and ǫ are positive constants and, ỹ f. (5) d is obtained b integrating (4) with d () (). 4. Fu Rules The fu rules have been built in supervised manner simpl b taking into account the input and output linguisticvariablesdefined.allthepossiblestatesthatthe manipulator can come across have been considered, and for each of them the best action is assigned. It has been observedthatthemovementofthefirstjoint(j)modifies (in the image) onl the coordinate of the center of the circumference. The movement of the second and third joints modif onl the radius and the coordinate of the center of the circumference. The fu sets of the input Fig. 5. Fu Sets for Torque linguistic variables are selected more closel in proximit of the goal position, such that the arm can move more precisel. The central linguistic fu set Z (Zero) for the variable has the meaning to be oriented toward the target. This step begins b defining the fu sets for variables input-output. After making several simulations, and then testing the real sstem were obtained fu sets shown in Figures 4 and 5, where linguistic variables are shown in Tables and. The fu sets are the same for each of the input variables. Table. Linguistic Variables for position error Axis x Linguistic Variable NHX NLX ZX PLX PHX Description Negative High Negative Low Zero Positive Low Positive High Table. Linguistic Variables for Torque in each joint Linguistic Variable NH NM NL Z PL PM PH 4.3 Example of design rules Description Negative High Negative Medium Negative Low Zero Positive Low Positive Medium Positive High To illustrate how the rules were designed we present the following example. In Figure 3 there are two points across 458
Preprints of the 8th IFAC World Congress Milano (Ital) August 8 - September, 7 55 6 5 5 4 3 45 4 35 3 5 8 4 6 8 5 4 6 8 ts Fig. 6. Path in Pixels Camera 7 Fig. 8. ( ) Desired position and (- - -) Actual Position in axes 6 9 5 8 4 7 3 6 5 4 8 4 6 8 3 Fig. 7. Path in Pixels Camera them the robot must move. In this case joint (J) does not move, J moves counterclockwise and J3 moves clockwise. In this case J must make a move less than J, therefore J begins with torque PL and J3 has an initial torque NM. The movement of the end-effector produces that position errors get smaller and therefore the applied torques are lower. 5. SIMULATION RESULTS We have carried out some simulations based on the manipulator A465 of CRS Robotics. It has six degrees of freedom, but we have used onl joints, and 3 to have three degrees of freedom. In order to implement the control law it is necessar to have x, and available, whichisobtainedbcalculatingthecentroidofthesphere that have been attached at the robot end effector. Figures 6 and 7 show the path followed b the end effector in the image coordinates for the Cameras and respectivel. In Figures 8, 9 and actual and desired image coordinates are shownfor eachcoordinate. It can be appreciated that the final position is reached, while the continuous desired path is also well followed. 4 6 8 ts Fig. 9. ( ) Desired position and (- - -) Actual Position in axes 55 5 45 4 35 3 5 5 5 4 6 8 ts Fig.. ( ) Desired position and (- - -) Actual Position in axes 3 The tracking errors are shown in Figures, and 3. 458
Preprints of the 8th IFAC World Congress Milano (Ital) August 8 - September, 5 5 5 5 4 6 8 ts Fig.. Tracking error 6 4 4 6 8 4 6 8 ts Fig.. Tracking error 3 4 5 6 7 8 4 6 8 ts Fig. 3. Tracking error 3 6. CONCLUSIONS This paper presents a simple control scheme for 3D robot tracking problem in a visual servoing approach. The proposed design requires onl position measurements. This paper has shown how fu logic controller can incorporate the ke components of reactive controllers and formal reasoning on uncertain information of the vision sensor. ACKNOWLEDGEMENTS This work is based on research supported b the CONA- CYT under grant 58 and b the DGAPA UNAM under grant IN548. REFERENCES M. Arteaga, M. Bueno, and A. Espinoa. A simple approach for D visual servoing. In 8th IEEE International Conference on Control Applications. Part of 9 IEEE Multi-conference on Sstems and Control. Saint Petersburg, Russia, Jul 9. Miguel Bernal, Thierr Marie Guerra, and Alexandre Krusewskic. A membership-function-dependent approach for stabilit analsis and controller snthesis of takagi sugeno models. Fu Sets and Sstems, 6: 776 795, Februar 9. F. Chaumette and S. Hutchinson. Visual servo control. Part I: Basic approaches. IEEE Robotics Automation Magaine, 3:8 9, 6. ThierrMarieGuerra,AlexandreKrusewski,andMiguel Bernal. Control law proposition for the stabiliation of discrete takagi sugeno models. IEEE Transactions on Fu Sstems, 7:74 73, June 9. L. Hsu, R. Costa, and F. Liarralde. Lapunov/passivitbased adaptive control of relative degree two mimo sstems with an application to visual servoing. In American Control Conference Minneapolis, Minnesota, USA, June 6. S. Hutchinson and F Chaumette. Visual servo control. Part II: Advanced approaches. IEEE Robotics Automation Magaine, 4:9 8, 7. R. Kell, R. Carelli, O. Nasisi, B. Kuchen, and F. Rees. Stable visual servoing of camera-in-hand robotic sstems. IEEE/ASME Transactions on Mechatronics, 5: 39 48,. C. Liu, C. Cheah, and J.J Slotine. Adaptive Jacobian Tracking control of rigid-link electricall driven robots based on visual task-space information. Automatica, 4:49 5, 6. Kai Michels, Frank Klawonn, Rudolf Kruse, and Andreas Nrnberger. Fu Control. Fundamentals, Stabilit and Design of Fu Controllers. Springer-Verlag Berlin Heidelberg, 6. Türker Sahin and Erkan Zergeroglu. Adaptive 3D visual servo control of robot manipulators via composite camera inputs. Turkish Journal Of Electrical Engineering and Computer Sciences, 4:53 66, 6. Ya Lei Sun and Meng Joo Er. Hbrid fu control of roboticssstems. IEEE Transactions on Fu Sstems, :755 765, December 4. Kauo Tanaka and Hua O. Wang. Fu Control Sstems Design and Analsis: A Linear Matrix Inequalit Approach. John Wile and Sons, Inc.,. A. Taebia and S. Islam. Adaptive iterative learning control for robot manipulators:experimental results. Control Engineering Practice, 4:843 85, 6. 4583