Closed-loop Servoing using Real-time Markerless Arm Tracking
Matthew Klingensmith, Thomas Galluzzo, Christopher Dellin, Moslem Kazemi, J. Andrew Bagnell, Nancy Pollard
{mklingen, tgalluzzo, cdellin, moslemk, dbagnell}@andrew.cmu.edu

Abstract: We present a simple, efficient method of real-time articulated arm pose estimation using stochastic gradient descent to correct unmodeled errors in the robot's kinematics with point cloud data from commercial depth sensors. We show that our method is robust to error in both the robot's joint encoders and in the extrinsic calibration of the sensor, and that it is both fast and accurate enough to provide real-time performance for autonomous manipulation tasks. The efficiency of our technique allows us to embed it in a closed-loop position servoing strategy, which we extensively use to perform manipulation tasks. Our method is generalizable to any articulated robot, including dexterous humanoids and mobile manipulators with multiple kinematic chains.

All authors are associated with the Carnegie Mellon Robotics Institute, Pittsburgh, PA.

Fig. 1. The robot opens a door using arm pose estimation. Subfigure (a) shows the currently estimated state of the robot (transparent), and the pose of the robot given by the joint encoders (solid). The true pose of the arm is closer to the estimated state of the robot from our method than to the state determined from the joint encoders alone.

Fig. 2. Twenty inverse kinematics solutions for a Barrett WAM end effector pose (a) are shown in composite. Cable slack and other mechanical imperfections cause large differences in the resulting pose of the end effector.

I. INTRODUCTION

In robotic manipulation, uncertainty remains a fundamental challenge. Robots struggle with inaccurate or incomplete data from sensors, resulting in both external error (with respect to objects in the robot's environment) and internal error (with respect to the robot's kinematic configuration). In industrial settings, both types of error can be compensated for by altering the robot's environment to reduce uncertainty in object pose and by constructing industrial robot arms which are highly rigid and precise. However, in uncontrolled settings, robots must rely on noisy sensors to determine object poses. Further, industrial robot manipulators are often unsuitable for autonomous robotics in uncontrolled settings due to their weight, cost, and extreme rigidity. The result is that autonomous robots in real-world settings often have much more external and internal error than their industrial counterparts. Since algorithms in robotic manipulation (such as grasp planners or motion planners) often assume that the states of the environment and robot are known with certainty, significant error in object pose or robot configuration may cause autonomous manipulation tasks to fail.

In this domain, uncertainty is in part accounted for by calibration. That is, an offline procedure may be used to estimate unknown or poorly modeled parameters of the system and then store those parameters to correct for errors during task execution. When the state of the system is defined fully by these parameters, calibrating them results in error reduction. While useful, calibration is limited to estimating simple parameters of the system that do not change during task execution (such as fixed camera parameters, or encoder offsets). However, some parameters may be difficult to model and estimate. Particularly, in non-rigid robots with lightweight series-elastic or cable-driven motors, the dynamics of the system can be very complex.
Motors may slip, metal components may expand and contract with heat, cables may become untensioned or stretched, and external forces may contort the system in ways which are difficult to predict (Fig. 2). Due to these factors, the accuracy of calibration generally decreases over time, and the parameters will have to be re-calibrated (an often tedious exercise). For these reasons, it becomes desirable to augment calibration with real-time estimation of the system state by analyzing sensor data. In this way, we can circumvent unmodeled system parameters not accounted for during calibration by
updating our belief about the state directly. In this work, we aim to perform common manipulation tasks by reducing errors between observed sensor data and the robot's kinematic model during the execution of a control loop. Our method is both easy to implement and extremely efficient, and it obtains much better accuracy than the joint encoders alone. Unlike existing pose estimation algorithms in robotic manipulation, our algorithm is fast enough to run during a closed-loop servo motion, requires no further data than the robot's joint encoders and a depth image, and makes no independence or Gaussian assumptions about the underlying statistical distribution of the error. We show that by using the same sensor to perceive both the environment and the robot's arm, we are able to successfully complete tasks autonomously which would otherwise fail.

II. RELATED WORK

Tracking of articulated bodies has been well studied and employed in human motion capture research as well as in robotic manipulation tasks. The general approach is to regulate to zero the error between corresponding features extracted from the sensory input and the rendered model of the articulated bodies. In this sense, each of the following techniques is model-based, in that it relies on an explicit model of the state. The existing techniques differ in terms of the input data and how the error between the measurements and the rendered model is applied to approximate and correct the models. Here we relate only to some of the most relevant works in human motion capture and robotic manipulation.

In the context of human motion capture, articulated body tracking is employed to estimate the full configuration of the limbs (see [8] for an overview). Commercially available marker-based approaches [9] rely on placing reflective markers or fiducials on the articulated bodies, which limits their usage in robotic manipulation, where it is often not possible to attach such devices to articulated bodies in the scene. To achieve markerless tracking, efforts have been devoted to using a variety of sensory inputs, e.g., 2D features, contours, color, and depth data. In a closely related work, Grest et al. [4] perform a non-linear optimization to estimate the limbs' degrees of freedom based on depth data correspondences efficiently computed using ICP. We converged to a similar approach which uses online stochastic gradient descent rather than iterative Gauss-Newton least squares regression to minimize the ICP loss function. Additionally, in our problem domain, the objective is not to precisely estimate the articulated bodies' degrees of freedom, but to calculate the appropriate joint offsets to minimize the error between all visible parts of the robot and the measured data. This effectively allows us to reduce the positioning error at the end effector in a closed-loop servoing task, e.g., opening a door.

In the context of manipulation, Krainin et al. [7] use arm pose estimation to model objects held in the robot's hand. In their work, they make use of the Articulated ICP (AICP) [11] algorithm for pose estimation, which relies on Levenberg-Marquardt (LM) optimization to minimize the error between the model and the observed sensor data iteratively for each joint. They also incorporate a Kalman filter into their pose estimation routine. While their method uses the same ICP cost function as ours, we use simple stochastic gradient descent (as in [3]) to find a minimum of that cost function, which is more efficient. Krainin et al.
[7] are able to spend more time accurately estimating the robot's pose than we are, since their task is to model a static object held in the robot's hand, whereas our task is to dynamically estimate the pose of the robot's arm in closed-loop position servoing in order to accomplish a manipulation task in real time.

In an independent work, Hudson et al. [5] combine different sensory inputs (depth data from stereo cameras and 3D ranging sensors as well as visual appearance features and silhouettes of the object and manipulator) using an unscented Kalman filter framework to leverage the advantage of each sensor and various features. Our work, in contrast, relies only on a depth sensor and joint encoders, makes no Kalman filter approximation, and has no requirement for fiducials or visible silhouettes. This allows us to use our algorithm in real time during a control loop, rather than as an offline procedure used before grasping, in contrast to [5].

Our work is closely related to Position-Based Visual Servoing (PBVS) [6], in which the end effector pose and/or poses of objects are estimated in the camera frame, and control commands are sent to the robot arm. Our work is unique in that we do not do 3D reconstruction, and instead rely on depth data directly with no further segmentation.

III. THE ALGORITHM

We treat the estimation of the robot's configuration as an optimization problem, which we solve by stochastic gradient descent [3]. In the optimization procedure, we attempt to minimize the sum-squared correspondence error between the closest points on the robot's body and the observed sensor points.

A. Optimization via Stochastic Gradient Descent

Assume that the robot's body is made of one or more articulated chains with N degrees of freedom. Call $q \in \mathbb{R}^N$ a configuration of the robot. We can model the robot as a set of points in 3D space, given its configuration. Define the forward kinematics function $B(q) = \{b_1, b_2, b_3, \ldots, b_K\}$, where $b_i \in \mathbb{R}^3$ is the $i$-th point of the robot's body. At the same time, the robot has a sensor which produces a point cloud (such as a depth camera, a stereo camera, or a laser). Call the point cloud $Z = \{z_1, z_2, z_3, \ldots, z_P\}$, where $z_i \in \mathbb{R}^3$ is the $i$-th point in the point cloud. We define the minimum distance operator, which gives the squared distance from the body point $B_i(q) = b_i$ to the closest point in the sensor point cloud:

$$D(q, i, Z) = \min_{z \in Z} \|B_i(q) - z\|^2 = \min_{z \in Z} \tilde{z}_i^T \tilde{z}_i \quad (1)$$

where $\tilde{z}_i = B_i(q) - z$.
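To make the minimum distance operator concrete, the following is a minimal sketch of how the correspondences and squared distances of Eq. (1) could be computed for all body points at once. This is not the authors' implementation: the paper stores the sensor cloud in an octree, while this sketch substitutes a SciPy k-d tree, and the function and variable names are illustrative only (inputs are assumed to be NumPy arrays of shape (K, 3) and (P, 3)).

```python
import numpy as np
from scipy.spatial import cKDTree

def correspondence_residuals(body_points, cloud):
    """For each visible body point B_i(q), find the nearest sensor point z*
    and return the residuals z~_i = B_i(q) - z* and the distances D(q, i, Z)."""
    body_points = np.asarray(body_points)
    cloud = np.asarray(cloud)
    tree = cKDTree(cloud)                     # stand-in for the octree used in the paper
    dists, nearest = tree.query(body_points)  # nearest cloud point for every body point
    residuals = body_points - cloud[nearest]  # z~_i = B_i(q) - z*
    return residuals, dists ** 2              # D(q, i, Z) = ||z~_i||^2
```

The correspondence rejection described later (ignoring body points whose nearest cloud point is farther than a small threshold) could be added by passing a distance_upper_bound to the query and discarding the misses.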
If we assume that the set $B(q)$ contains only those points of the robot's body visible to the sensor, we can define the loss function

$$L(q) = \frac{1}{2} \sum_{i=1}^{K} D(q, i, Z) \quad (2)$$

which we call the correspondence error between the robot's model and the observed sensor data. Our aim is to minimize this function. One way of minimizing it is to compute its gradient with respect to the robot's configuration, and descend this gradient. We can derive the gradient of the loss function:

$$\nabla L(q) = \nabla_q L(q) = \nabla_q \frac{1}{2} \sum_{i=1}^{K} D(q, i, Z) = \frac{1}{2} \sum_{i=1}^{K} \nabla_q D(q, i, Z) \quad (3)$$

Since it is true in general for differentiable functions that $\nabla_q \min_x f(q, x) = \nabla_q f(q, x^*)$, where $x^*$ is the value minimizing the function $f$ (i.e., a subgradient of the min function evaluated at the minimum equals the gradient of the min function), we have

$$\nabla_q D(q, i, Z) = \nabla_q \min_{z \in Z} \|B_i(q) - z\|^2 = \nabla_q \|B_i(q) - z^*\|^2 = 2 \big(\nabla_q B_i(q)\big)^T \tilde{z}_i = 2 J_i^T \tilde{z}_i \quad (4)$$

where $z^*$ is the cloud point achieving the minimum and $J_i$ is the robot's kinematic Jacobian evaluated for the point $B_i(q)$ at configuration $q$. (For non-differentiable functions, this holds for all but a finite set of non-differentiable points. Moreover, in regions where the loss function is locally convex, the computation gives the local subgradient even at non-differentiable points.) It follows that

$$\nabla L(q) = \sum_{i=1}^{K} J_i^T \tilde{z}_i \quad (5)$$

Geometrically, this can be interpreted for a serial link robot as a sum of forces, where each force is applied at a body point of the robot, pushing it away from the nearest point in the point cloud.

Notice that while the loss function is defined over joint configurations, the quantity we want to minimize is over points in the workspace. A step along the gradient of L is then the steepest step in configuration space which minimizes a loss function over body points. It would be more desirable to instead take the steepest step in the workspace directly to minimize the loss function. We define a metric over changes in configuration which considers the resulting change in workspace points:

$$\|\Delta q\|_w = \Delta w_q^T \Delta w_q$$

where $\Delta w_q$ is the resulting change in the workspace position of a single point due to the change in configuration $\Delta q$. This is:

$$\|\Delta q\|_w = (J \Delta q)^T (J \Delta q) = \Delta q^T (J^T J) \Delta q \quad (6)$$

where $J$ is the kinematic Jacobian of the manipulator evaluated for a particular workspace point. We precondition [1] the gradient by considering a change in configuration $\Delta q$ to be small when

$$\|\Delta q\|_w = \Delta q^T (J^T J) \Delta q \le \beta \quad (7)$$

where $\beta$ is a small value. By doing this, we transform the space of the loss function into one where joint values are scaled to correspond to their resulting workspace motion [1]. This leads to a new preconditioned gradient computation:

$$\tilde{\nabla} L(q) = \sum_{i=1}^{K} J_i^{+} \tilde{z}_i \quad (8)$$

where $J_i^{+} = (J_i^T J_i)^{-1} J_i^T$ is the regularized Jacobian pseudo-inverse for the body point $b_i$ evaluated at the configuration $q$. In practice, using the pseudo-inverse rather than the transpose led to faster and more stable convergence. We then have an optimization procedure:

$$q_{t+1} = q_t - \lambda \tilde{\nabla} L(q_t) \quad (9)$$

where $\lambda \in \mathbb{R}$ is a step size and $q_t$ is the robot's best estimate of its position at time $t$. If the robot did not have access to joint encoders, the estimate $q_t$ could be used directly as a belief about the robot's position. However, the joint encoders themselves provide a very accurate estimate of the robot's state. We can incorporate the robot's joint encoders into the estimation by modeling the error of the system as a random (configuration-dependent) offset from the reported joint angles, i.e.,

$$q^{*} = q + \delta_t \quad (10)$$

where $q^{*}$ is the true (unknown) state of the robot, and $\delta_t$ is a random (unknown) offset from the joint encoders $q$ at time $t$. Then, we keep a running estimate of $\delta$ (initially zero):

$$\delta_{t+1} = \delta_t - \lambda \tilde{\nabla} L(q + \delta_t) \quad (11)$$
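The following is a minimal sketch of one stochastic update of the encoder offset, combining Eqs. (8) and (11) with the random subsampling and 1/m normalization discussed just below. It is a sketch rather than the authors' C++ implementation: the helpers visible_body_points(q) and point_jacobian(q, i) are hypothetical stand-ins for the rendered body-point model and the per-point kinematic Jacobian, the k-d tree stands in for the paper's octree, and the damping constant stands in for the regularization of the pseudo-inverse. All array arguments are assumed to be NumPy arrays.

```python
import numpy as np
from scipy.spatial import cKDTree

def offset_update(q_enc, delta, cloud, visible_body_points, point_jacobian,
                  step=0.05, num_samples=250, damping=1e-3):
    """One stochastic gradient step on the joint-offset estimate delta (Eq. 11).

    visible_body_points(q) -> (K, 3) array of visible body points B(q)  [hypothetical helper]
    point_jacobian(q, i)   -> (3, N) kinematic Jacobian of body point i [hypothetical helper]
    """
    q = q_enc + delta                          # current configuration estimate, Eq. (10)
    body = visible_body_points(q)
    tree = cKDTree(cloud)                      # stand-in for the paper's octree

    # A random subsample of body points makes the gradient stochastic.
    m = min(num_samples, len(body))
    sample = np.random.choice(len(body), size=m, replace=False)

    grad = np.zeros_like(delta)
    for i in sample:
        dist, j = tree.query(body[i])
        residual = body[i] - cloud[j]          # z~_i = B_i(q) - z*
        J = point_jacobian(q, i)               # 3 x N
        # Preconditioned gradient term J_i^+ z~_i (Eq. 8), regularized by damping.
        J_pinv = np.linalg.solve(J.T @ J + damping * np.eye(J.shape[1]), J.T)
        grad += J_pinv @ residual

    grad /= m                                  # normalize by the number of sampled points
    return delta - step * grad                 # Eq. (11)
```

Calling offset_update once per depth frame, with delta carried over between frames, gives the running estimate described above.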
Note that there might be many points in both Z and B at any given time, making the evaluation of $\nabla L$ computationally infeasible. So instead, we choose a random subset of $m = \min(K, M)$ points from B, and evaluate the gradient for those points only. In this sense, our gradient descent procedure becomes a stochastic one. Additionally, we normalize the gradient by multiplying it by $\frac{1}{m}$, since the gradient is naturally scaled by the number of observed points. The result is that the offset $\delta$ converges to a local minimum of the loss function, so that the robot's model best describes the sensor data.

Fig. 3. Rendering phases for pose estimation. (a) The RGB image from the commercial depth camera. (b) The depth image from the sensor. (c) The 3D model of the robot, as rendered from the perspective of the depth camera. Each link is colored differently to provide a simple mapping from rendered point to link on the robot. (d) The final composite point cloud rendered in 3D, with the arm segmented out, and a random subset of model points selected.

B. Performance Considerations

The algorithm has three major components which comprise a majority of the computation time.

1) Generating B: We model the robot as a triangular mesh, and exploit graphics hardware to extract a series of points which are visible to the sensor by rendering a simulated depth image.

2) Finding corresponding points between Z and B: We do this efficiently by storing the sensor data in an octree. In the process of searching for the nearest point, we reject points beyond a small threshold. By doing this, the algorithm will sometimes fail to find correspondences between certain points. If this is the case, we simply ignore points for which no correspondence could be found.

3) Calculating $J_i^{+}$ for each point: First, we color each rendered point in B so that the color of each point corresponds to the link it is attached to. From there, we use the method described in [12] to efficiently find the kinematic Jacobian for each randomly subsampled point. We then compute the pseudo-inverse.

Fig. 4. Timing data for one iteration of the algorithm with randomly subsampled points. Different phases of the algorithm are timed. Other phases not shown contributed negligible time. Acquiring sensor data and constructing the octree happen only once per sensor update (which in our case is 30 Hz).

C. Closed Loop Position Servoing

Now that we have a sufficiently fast pose estimation algorithm, we can use it inside a control loop to move points on the robot's body to a desired position in space. Suppose that we have a frame of interest somewhere on the robot's body (e.g., the end effector frame) called x. Then, if we want to move this frame toward some desired location $x_d$, we can perform a linear Jacobian pseudo-inverse servo motion inside a control loop to achieve this, following the general equation [12]:

$$\dot{q} = J_e^{+} \dot{x}_d + \lambda (I - J_e^{+} J_e) \nabla H(q) \quad (12)$$

where $J_e$ is the robot Jacobian with its pseudo-inverse denoted $J_e^{+}$, $\lambda$ is a gain value, and $H(q)$ is a null-space cost/utility function. Different criteria can be used to define $H(q)$ depending on the objective, e.g., avoiding joint limits or kinematic singularities, and $\dot{x}_d$ is a velocity toward the desired pose. However, since we are not confident about where x is exactly, we instead have some current estimate for x, called $\hat{x}$, which comes from the pose estimation procedure. $\hat{x}$ arrives asynchronously as the sensor provides new depth data. This is stored in a buffer, which the inner control loop reads. If the speed at which we move is slow enough, the control loop remains stable, and the robot achieves the desired location for $\hat{x}$. The details of this control loop are laid out in Fig. 5.
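Below is a minimal sketch of one iteration of the servo loop of Eq. (12) combined with the estimated offset. The helpers frame_position(q), frame_jacobian(q), and grad_H(q) are hypothetical stand-ins for the forward kinematics, Jacobian, and null-space utility gradient of the frame of interest, and delta is assumed to be the latest offset read from the buffer written by the pose estimation loop (Fig. 5).

```python
import numpy as np

def servo_step(q_enc, delta, x_desired, frame_position, frame_jacobian, grad_H,
               gain=0.5, null_gain=0.05, dt=0.01):
    """One iteration of the Jacobian pseudo-inverse servo (Eq. 12).

    frame_position(q) -> (3,) position of the frame of interest       [hypothetical helper]
    frame_jacobian(q) -> (3, N) Jacobian of that frame                 [hypothetical helper]
    grad_H(q)         -> (N,) gradient of a null-space utility H(q)    [hypothetical helper]
    """
    q_est = q_enc + delta                       # corrected configuration estimate
    x_hat = frame_position(q_est)               # current estimate of the frame
    xdot_d = gain * (x_desired - x_hat)         # velocity toward the desired pose
    J = frame_jacobian(q_est)
    J_pinv = np.linalg.pinv(J)
    n = len(q_est)
    # Primary task plus null-space term, Eq. (12).
    qdot = J_pinv @ xdot_d + null_gain * (np.eye(n) - J_pinv @ J) @ grad_H(q_est)
    return q_enc + qdot * dt                    # joint command sent to the arm
```

Provided the commanded motion is slow relative to the rate at which new offsets arrive, this behaves like a standard pseudo-inverse servo, except that the error it drives to zero is measured against the depth sensor rather than the encoders alone.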
IV. EXPERIMENTS

A. Servoing to a Point

We implement our algorithm in C++ on the DARPA ARM-S robot [2], which has two Barrett WAM arms and an Asus Xtion Pro commercial depth sensor. Our primary computer is a Dell Precision T75 with 8 GB of RAM, an 8-core Intel i7 processor, and an Nvidia Quadro FX 8 GPU. Our first test is to repeatedly touch a 1.5 cm radius red dot with one of the robot's fingers. To do this, we first perceive the red dot using the depth sensor and color segmentation;
then, we plan to a pose 1 cm above the red dot by selecting a random inverse kinematics solution which places the finger just above the red dot. Finally, we execute a Jacobian pseudo-inverse motion in a control loop which iteratively moves the finger of the robot toward the red dot until it touches. By including pose estimation in the control loop, we are able to iteratively correct error as the robot moves toward its target. Each touch is recorded by hand by marking the paper where the robot touches. We do this both with and without arm pose estimation (Fig. 6). Our results (Fig. 7) show that with arm pose estimation, we are able to reduce final pose error so that all touches fall within the red dot. Without arm pose estimation, there is significant systematic error in the final pose of the arm, and the standard deviation of the touches is much larger.

Fig. 5. Block diagram of the pose estimation being used inside a control loop. There are two blocks in the control loop. The top block corresponds to the pose estimation loop, which occurs whenever a new sensor update Z is received from the RGB-D sensor. The bottom block corresponds to the servoing loop, which moves the arm iteratively until a desired pose $x_d$ is reached for a particular point attached to the robot's body. The servoing loop reads the configuration offset from a buffer that the sensor loop writes into.

Fig. 6. The robot touches a red circle approximately 1 cm in radius by servoing using a Jacobian pseudo-inverse move such that the finger lands on the center of the red circle. Typical cases with (a) and without pose estimation are shown. Points on the paper where the finger touched were marked by hand. Each touch used a randomly selected inverse kinematics solution which puts the finger 1 cm above the red dot as the starting configuration.

Fig. 7. Fifteen touches of the red circle for each method are displayed; the error is shown in centimeters. The red circle is approximately 1.5 cm in radius, and the robot is attempting to touch the center of the circle (which is estimated by a vision system using color segmentation). The standard deviation of the error without pose estimation is [3., 1.7] cm in x and y respectively, and with pose estimation it is [.8, .] cm.

B. Robustness to Extrinsic Sensor Calibration Error

Because it is simply correcting the current error between the kinematic model of the robot and the sensor data, our algorithm is robust to small calibration errors with respect to the sensor. To confirm this, we simulated a small error in the extrinsic calibration of the sensor by corrupting the joint encoders of the robot's pan-tilt neck mechanism by an artificial offset (from -9 to 9 degrees). We again performed the dot-touching test, but with different amounts of neck error.

Fig. 8. We performed the dot-touching test again, but with different amounts of artificial error added to the pan joint of the robot's pan-tilt head (on which the RGB-D sensor sits). The norm of the error with and without pose estimation is shown.

The data (Fig. 8) show that, without arm pose estimation, the error of the final touch location diverges dramatically as calibration error increases. In contrast, with pose estimation, the error increases much more slowly. It just so happened that for negative encoder offsets to the pan joint of the neck, the elbow of the robot's arm occluded its hand, making pose estimation difficult.
For positive offsets, on the other hand, the hand was more visible, and the pose estimation converged relatively faster. The lowest error for both cases is when the artificial neck error is near zero. This result highlights the need for visibility planning in addition to pose estimation, as the estimator will converge more slowly when less of the robot's arm is visible.

C. Opening a Door

Our next experiment shows the real-time arm pose estimation algorithm's usage in everyday scenarios. For Phase I
of the DARPA ARM-S Project [2], one of the tasks was to autonomously open a small door. Due to error in the perceived location of the handle, and error in the robot's joint encoders, simply perceiving and grasping the door handle is a challenge. To solve this problem, we initially used guarded moves and force sensing [2] to blindly move the hand into position to grasp the door handle. However, this approach was time consuming and prone to failure. Rather than using guarded moves, we now instead continuously estimate the pose of the robot's arm with the depth camera as the robot servos toward a desired pose just above the door handle. We then grasp the handle, and open the door (Fig. 1). We compared this behavior of the robot when it is able to use pose estimation, and when it simply blindly servos to a pose just above the door handle. We found that without using pose estimation, the robot fails 1/1 trials to grasp the door handle. In two of the trials, the robot missed the door handle completely. However, using pose estimation, the robot succeeds in 9/10 trials, with the one failure case occurring due to visibility limitations of the chosen arm configuration. A video of this experiment can be found online (YouTube video of the door opening experiment).

Fig. 9. The average error at each iteration between closest points in the kinematic model and the observed depth data is recorded during the dot touching experiment. The first spike represents the initial error, which quickly converges. The second spike (around iteration 8) corresponds to the point at which the robot touched the surface of the table (this causes strain on the joints, and causes them to move).

V. DISCUSSION

By optimizing the joint offset $\delta$ inside a closed control loop, we are able to iteratively correct for configuration-specific errors during the execution of guarded moves by the robot. However, it is not clear that our algorithm is actually estimating the state of the robot. Rather, since our loss function is defined over the correspondence error between the robot model and the observed sensor data, we are only indirectly finding joint offsets which happen to explain away perceived error. Even so, we are able to perform real-world manipulation tasks using our method.

We can frame the goal of manipulation tasks in terms of the geometric configuration of bodies in the workspace. The goal of a grasping task, for example, is to move the robot's arm to a configuration where its end effector's position in the workspace corresponds to the physical position of the object we wish the robot to grasp. This is complicated by the fact that the robot only has access to data it can perceive through its sensors. Different sensors have different systematic errors, so if the perception system of the robot is used to estimate the physical workspace position of the object to be grasped, while the joint encoders are used to estimate the physical workspace position of the robot's body, the robot is bound to fail the grasping task because the errors between sensors are not equivalent. Our method mitigates this issue by implicitly transforming the data from the joint encoders into agreement with the data from the depth sensor. It simply adjusts the estimated configuration of the robot so that the points on the body of the robot correspond to the (possibly incorrect or noisy) points observed by the sensor. The result is that the actual points of the body and the object to be grasped line up, and the robot successfully grasps the object. Future work might incorporate disturbance detection (Fig. 9), i.e., using pose estimation to determine when the arm has collided with obstacles or has successfully grasped an object.
Another concern is our method's robustness to outliers and other disturbances. It may be worthwhile to include a statistical filter, such as a Kalman filter, to explicitly model the uncertainty of the arm's pose. We may also wish to speed up the procedure even further by incorporating massively parallel computing. Many of the steps in our procedure are trivially parallel.

ACKNOWLEDGMENTS

The authors gratefully acknowledge funding under the DARPA Autonomous Robotic Manipulation Software Track (ARM-S) program.

REFERENCES

[1] S. Amari and S. C. Douglas. Why natural gradient? In Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing, 1998.
[2] J. Andrew (Drew) Bagnell, Felipe Cavalcanti, Lei Cui, Thomas Galluzzo, Martial Hebert, Moslem Kazemi, Matthew Klingensmith, Jacqueline Libby, Tian Yu Liu, Nancy Pollard, Mikhail Pivtoraiko, Jean-Sebastien Valois, and Ranqi Zhu. An integrated system for autonomous robotics manipulation. In IEEE/RSJ International Conference on Intelligent Robots and Systems, October 2012.
[3] Leon Bottou. Large-scale machine learning with stochastic gradient descent. In Yves Lechevallier and Gilbert Saporta, editors, Proceedings of the 19th International Conference on Computational Statistics (COMPSTAT 2010), Paris, France, August 2010. Springer.
[4] Daniel Grest, Jan Woetzel, and Reinhard Koch. Nonlinear body pose estimation from depth images. In Pattern Recognition, pages 285-292, 2005.
[5] Paul Hebert, Nicolas Hudson, Jeremy Ma, Thomas Howard, Thomas Fuchs, Max Bajracharya, and Joel W. Burdick. Combined shape, appearance and silhouette for simultaneous manipulator and object tracking. In ICRA, 2012.
[6] S. Hutchinson, G. D. Hager, and P. I. Corke. A tutorial on visual servo control. IEEE Transactions on Robotics and Automation, 12(5):651-670, 1996.
[7] Michael Krainin, Peter Henry, Xiaofeng Ren, and Dieter Fox. Manipulator and object tracking for in-hand 3D object modeling. Int. J. Rob. Res., 30(11), September 2011.
[8] Thomas B. Moeslund and Erik Granum. A survey of computer vision-based human motion capture. Comput. Vis. Image Underst., 81(3):231-268, 2001.
[9] Thomas B. Moeslund, Adrian Hilton, and Volker Krüger. A survey of advances in vision-based human motion capture and analysis. Comput. Vis. Image Underst., 104(2):90-126, 2006.
[10] Jorge Nocedal and Stephen J. Wright. Numerical Optimization. Springer, 2006.
[11] Stefano Pellegrini, Konrad Schindler, and Daniele Nardi. A generalisation of the ICP algorithm for articulated bodies. In BMVC, 2008.
[12] L. Sciavicco and B. Siciliano. Modelling and Control of Robot Manipulators. Advanced Textbooks in Control and Signal Processing Series. Springer-Verlag, 2000.
