Point-cloud-based Model-mediated Teleoperation


Xiao Xu, Burak Cizmeci, Eckehard Steinbach
Institute for Media Technology, Technische Universität München

Abstract—In this paper, we extend the concept of model-mediated teleoperation (MMT) to six degrees of freedom in complex environments using a time-of-flight (ToF) camera. Compared to the original MMT method, the remote environment is no longer approximated by a simple planar surface, but by a point cloud model. Thus, object surfaces with complex geometry can be used in MMT. In our proposed system, the point cloud model is captured by the ToF camera with high temporal resolution (up to 160 fps) and a flexible working range (10 cm to 5 m). Updating the model of the remote environment while the robot is in operation is thus easier compared to the original MMT approach. The point cloud model is transmitted from the teleoperator to the operator using a lossless H.264 codec. In addition, a simple point-cloud-based haptic rendering algorithm is adopted to generate the force feedback signal directly from the point cloud model without first converting it into polygons. Moreover, to compensate for the estimation error of the point cloud model, adaptive position and force control schemes are applied to enable stable and transparent teleoperation. Our experiments demonstrate the feasibility and benefits of utilizing the proposed method in MMT.

I. INTRODUCTION

A typical teleoperation system consists of a master/operator system, a slave/teleoperator system and a communication link in between [1]. The slave system is controlled by position/velocity commands generated by the user's operation of the master system, while the haptic information sensed by the slave system is returned to and displayed on the master system. The user can thus remotely interact with the environment on the slave side. For teleoperation with geographically separated master and slave systems, communication delay is unavoidable. Even a small time delay in the haptic channel jeopardizes the system stability and performance [2]. Several control architectures have been developed to enable stable teleoperation in the presence of communication delays. The classical control schemes, however, result in either poor transparency or poor stability properties [3], [4].

In [5], [6], the so-called predictive display method was developed to address both issues. In this method, a computer graphics (CG) model of the robot arm is overlaid on the real video images, which enables the user to locally view the motion of the slave robot before it actually moves and hence avoid possible collisions. An extension of the predictive display for environment modeling is implemented in [7], where a stereo camera is employed to capture the remote environment with a pre-scan procedure, and the 3D virtual environment (VE) is reconstructed accordingly with polygons or meshes. After that, a model of the telerobot is placed in the VE and the user can locally interact with the VE without delay. Although the predictive display method shows advantages compared to the classical control methods, the construction of the VE model is time consuming. In addition, the updating of the environment is not online and becomes even impossible while the robot is in operation. Moreover, the reconstructed virtual environment is typically extensive, while most areas of it are of no concern to the user during the operation.
Different from the methods above, the concept of model-mediated teleoperation (MMT) is proposed in [8], [9]. In the MMT method, the user is only concerned with the object, and its model, that the slave is interacting with. A simple object model (e.g., a plane) is computed based on the position and force signals on the slave side and transmitted back to the master side. The haptic feedback is generated locally based on the received object model. Thus, a stable and transparent teleoperation system is guaranteed. During operation, the object model is updated and transmitted back to the master side whenever the slave obtains a new model. Therefore, pre-scanning of the remote environment is no longer necessary, and the estimated object model is adaptively updated according to changes in the environment.

The main challenge for MMT is to build a geometric model of the object in the remote environment and to estimate its physical properties (impedance). In [10], a damper-spring model is employed to approximate the environment. In [11], a distance sensor is used to predict the position of a planar surface even before the slave is in contact with it. Both estimation approaches, however, work only for a one degree-of-freedom (DoF) system. A 6-DoF estimation method is proposed in [12], [13], yet the communication delay and the surface friction are ignored. An approach for estimating the object model in multiple DoF with more physical properties (such as the friction coefficient) in the presence of communication delays is proposed in [9], where a 2D planar surface model is extracted from a point cloud captured by a stereo camera.

The current work on MMT can only approximate the environment with a rigid planar surface in one or two dimensions. However, in most cases the environment is not a simple plane. A planar approximation of the remote environment leads to large deviations from the real environment and thus results in frequent model updates and incorrect haptic rendering, which degrades the system transparency and can even jeopardize the system stability [14]. Therefore, extending MMT to objects with complex geometry and physical properties is necessary.

In this paper, we propose a point-cloud-based MMT system which works in 3D space with complex environments. Different from previous MMT approaches, the remote environment is approximated neither by a simple planar surface nor by a simple geometric shape, but rather by a point cloud model, which can represent an object surface with arbitrary geometric properties.

In our work, a time-of-flight (ToF) camera is employed to capture depth images of the object surface. Due to the high frame rate and flexible working range of the camera, the point cloud model can be obtained very quickly, and online updating of the environment model is possible while the robot is in operation. In order to obtain precise point cloud data in 3D, pre-filtering and camera calibration techniques are employed. The transmission of the point cloud model is based on an H.264 codec running in lossless intra mode, since information loss in the point cloud model results in force-feedback rendering errors. Moreover, a simple point-cloud-based rendering algorithm [16] is adopted to generate haptic signals on the master side. Finally, a combination of position and force control methods compensates for the estimation error of the model position.

The rest of the paper is organized as follows. Sec. 2 explains the proposed point-cloud-based approach for model-mediated teleoperation. Sec. 3 presents the results of the experimental evaluation and discusses potential extensions of our system. Sec. 4 concludes this paper and outlines future work.

II. POINT-CLOUD-BASED MODEL-MEDIATED TELEOPERATION

Fig. 1 shows the idea of MMT, which uses the sensor information on the slave side (position, force/torque, etc.) to build a virtual model of the remote environment, including geometric and physical properties (model parameters). The model parameters are transmitted to the master side, where a local virtual model is reconstructed accordingly. While the user interacts with the remote environment, the haptic feedback is generated locally without any delay based on this virtual model. If the model parameters are perfectly estimated by the sensors on the slave side, the teleoperation system can thus be both stable and transparent for arbitrary communication delay. Generally, the main challenges of MMT lie in two aspects: obtaining a precise object model even for complex geometry, and estimating the corresponding physical properties. In the following, we address the first challenge by developing a point-cloud-based MMT system which does not require pre-scanning of the remote environment. The 3D point cloud model is built with the help of a ToF camera (Argos 3D-P100), which has a high frame rate (up to 160 fps) and a more flexible working range (10 cm to 5 m) compared to other 3D cameras such as the Microsoft Kinect and ASUS Xtion (about 50 cm to 3 m).

Fig. 1: Overview of an MMT system (adopted from [8]).

Fig. 2: Overview of the point-cloud-based MMT system, where p_m, f_m, p_s and f_s are the master position, master force, slave position and slave force, respectively.

A. System overview

An overview of the proposed system is shown in Fig. 2. The depth images captured by the ToF camera and the slave position and force signals are necessary for estimating the point cloud model. Once the point cloud model is obtained, it is transmitted to the master side. As the data size of the point cloud model is large, a compression scheme (codec) is employed to reduce the data size. On the master side, the reverse processes are implemented and the 3D point cloud model is reconstructed accordingly. Thus, the force-feedback signals can be generated locally based on the point cloud model.
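As a concrete illustration of what is exchanged between the two sides in Fig. 2, the following minimal Python sketch defines the two message types implied by the system overview: a model update (filtered depth image plus the transformation needed to rebuild the point cloud in world coordinates, cf. Sec. II-F) and a haptic sample (position/force pair). The field names and types are our own assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ModelUpdate:
    """Slave-to-master model message: a filtered depth image plus the
    rotation/translation used to rebuild the world-frame point cloud."""
    depth: np.ndarray   # filtered depth image (H x W), in metres
    R: np.ndarray       # combined rotation camera -> world, shape (3, 3)
    t: np.ndarray       # combined translation camera -> world, shape (3,)

@dataclass
class HapticSample:
    """Position/force pair exchanged in the haptic channel (p_m/f_m or p_s/f_s)."""
    position: np.ndarray   # 3D position in metres
    force: np.ndarray      # 3D force in newtons
```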
B. Pre-filtering block

The raw point cloud data (depth images) captured by the 3D camera are normally quite noisy, sometimes even with holes due to an invalid working range or wrong reflections (see Fig. 3). Therefore, the raw point cloud data need to be filtered. In this paper, in order to reduce the computational complexity of the online modeling, simple standard filters are employed. First, a 5-by-5 median filter is applied to each depth image to remove invalid points. Then a temporal average filter over every 25 frames is employed to reduce the noise of the depth image. In addition, a fast image inpainting algorithm as described in [15] is applied to fill holes in the depth image. In this hole-filling algorithm, we regard the depth image as a grayscale image. First, the hole regions in the depth image are extracted and marked. Afterwards, an isotropic diffusion (convolution with the matrices A and B) is applied inside the hole regions for several rounds, based on their neighborhoods. The diffusion kernels suggested by [15] are

A = \begin{pmatrix} a & b & a \\ b & 0 & b \\ a & b & a \end{pmatrix}, \quad B = \begin{pmatrix} c & c & c \\ c & 0 & c \\ c & c & c \end{pmatrix},

where the values of the weights a, b and c are given in [15]. After filtering, a low-noise depth image without holes is obtained (see Fig. 3).

Fig. 3: A depth image before filtering (left) and after filtering (right). The holes are filled by the median, average and inpainting filters.
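This pre-filtering chain can be prototyped in a few lines of Python. The sketch below applies the 5x5 median filter and the temporal average over a window of frames, and then fills any remaining holes by repeated isotropic diffusion inside a hole mask. The simple normalized 3x3 kernel and the number of diffusion rounds are stand-ins for the exact weights of [15], and the zero-depth hole test is an assumption about how invalid pixels are marked.

```python
import numpy as np
from scipy.ndimage import median_filter

def prefilter_depth(frames, diffusion_rounds=50):
    """Pre-filter a window of raw ToF depth frames (list of H x W arrays):
    5x5 median filter, temporal averaging, diffusion-based hole filling."""
    # 1) 5x5 median filter on each frame to remove invalid points
    filtered = np.stack([median_filter(f, size=5) for f in frames])
    # 2) temporal average over the window (e.g. 25 frames at 50 fps)
    depth = filtered.mean(axis=0)
    # 3) fill remaining holes (assumed to be zero-valued pixels) by repeated
    #    isotropic diffusion, a simplified stand-in for the kernels of [15]
    hole = depth <= 0.0
    kernel = np.array([[1., 2., 1.], [2., 0., 2.], [1., 2., 1.]])
    kernel /= kernel.sum()
    h, w = depth.shape
    for _ in range(diffusion_rounds):
        padded = np.pad(depth, 1, mode='edge')
        smoothed = sum(kernel[i, j] * padded[i:i + h, j:j + w]
                       for i in range(3) for j in range(3))
        depth[hole] = smoothed[hole]
    return depth
```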

C. Coordinate transformation

In order to build the 3D point cloud model in world coordinates from the depth images, a coordinate transformation (back projection) technique is employed. As illustrated in Fig. 4, three coordinate transformation steps are necessary: 1) from image coordinates to camera coordinates, 2) from camera coordinates to robot tool coordinates (R_1 and t_1), and 3) from robot tool coordinates to world coordinates (R_2 and t_2).

In the first step, every point in the depth image can be described as (u, v, z)^T, where u and v are the pixel coordinates in rows and columns and z is the distance value. The purpose of this step is to transform (u, v, z)^T to camera coordinates described by (x_c, y_c, z_c)^T. We assume that the camera view can be modeled by an ideal pinhole camera which projects 3D points onto a 2D image plane (Fig. 5). Therefore, the transformation from image coordinates (u, v, z)^T to camera coordinates (x_c, y_c, z_c)^T is

x_c = (o_x - v) \, z_c / f_x, \quad y_c = (u - o_y) \, z_c / f_y, \quad z_c = z,

where f_x and f_y are the camera focal lengths in the x and y directions, and o_x and o_y are the pixel offsets from the camera center.

For the second and third steps, the transformation from (x_c, y_c, z_c)^T to robot tool coordinates (x_t, y_t, z_t)^T and then to world coordinates (x_w, y_w, z_w)^T requires both rotation and translation:

(x_w, y_w, z_w)^T = R_2 (x_t, y_t, z_t)^T + t_2 = R_2 R_1 (x_c, y_c, z_c)^T + (R_2 t_1 + t_2).

With this, the 3D object point cloud model in the world coordinate system is obtained. The next step is to estimate the model properties and render haptic signals based on the point cloud model.

Fig. 4: Overview of the coordinate transformations in the proposed system.

Fig. 5: Back-projection from image to camera coordinates.
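The back-projection and the two rigid transformations can be combined into a single vectorized routine. The sketch below follows the equations above; the NumPy vectorization and the function signature are our own choices rather than the paper's implementation.

```python
import numpy as np

def depth_to_world(depth, fx, fy, ox, oy, R1, t1, R2, t2):
    """Back-project a depth image (H x W, metres) into an N x 3 world-frame
    point cloud, following the pinhole relations of Sec. II-C."""
    rows, cols = depth.shape
    u, v = np.meshgrid(np.arange(rows), np.arange(cols), indexing='ij')
    z_c = depth
    # image -> camera coordinates (Fig. 5)
    x_c = (ox - v) * z_c / fx
    y_c = (u - oy) * z_c / fy
    pts_c = np.stack([x_c, y_c, z_c], axis=-1).reshape(-1, 3)
    # camera -> tool -> world: p_w = R2 (R1 p_c + t1) + t2
    R = R2 @ R1
    t = R2 @ t1 + t2
    return pts_c @ R.T + t
```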
D. Modeling block

In this paper, we consider the object as a static rigid body without friction. Therefore, the only model parameter that needs to be estimated is the geometry of the object, which is represented by the 3D point cloud. As we assume rigid objects, the stiffness is set to the maximum value that the master device can display.

E. Force rendering

The function of this block is to render haptic signals directly from the 3D point cloud. A method similar to [16], with a fast plane detection algorithm, is employed. As illustrated in Fig. 6, the haptic interaction point (HIP) and a proxy are used to detect collisions and render the force. If the HIP is outside of the estimated surface, the proxy follows the motion of the HIP. If there are any points within r_1, the proxy is entrenched and is moved one step in the direction of n. If there are any points between r_1 and r_2 and the HIP is inside the estimated surface, the proxy is considered to be in contact with the object. Thus, the motion of the proxy is constrained to the estimated surface (in the direction of v), and a temporary plane model is estimated using all the points p_i = (x_i, y_i, z_i)^T, i \in I, between r_1 and r_2. Similar to [17], the plane center is set to the average position of all these points, and the normal vector is obtained by an eigen-analysis of the covariance matrix C \in R^{3x3} of all p_i, where

C = \frac{1}{|I|} \sum_{i \in I} p_i p_i^T.

Let \lambda_1, \lambda_2, \lambda_3 be the three eigenvalues of the covariance matrix C and v_1, v_2, v_3 the corresponding eigenvectors. The plane normal n is then given by the eigenvector corresponding to the minimum eigenvalue:

n = v_k, \quad k = \arg\min_{k \in \{1,2,3\}} \lambda_k.

Once the temporary plane normal is obtained, the haptic signal can be rendered at 1 kHz with a simple spring model based on Hooke's law.

Fig. 6: The definition of the proxy (left) and the estimation of the surface normal (right).
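A minimal sketch of this rendering step is given below, assuming NumPy. It fits the temporary plane to the neighborhood between r_1 and r_2 via eigen-analysis of the mean-centered covariance and applies a Hooke's-law spring along the resulting normal; the full proxy/HIP state machine of [16] is omitted, and the radii and stiffness values are illustrative rather than taken from the paper.

```python
import numpy as np

def render_contact_force(hip, points, r1=0.005, r2=0.010, stiffness=2000.0):
    """Fit a temporary plane to the points between r1 and r2 around the HIP
    and return the spring force pushing the HIP back to the surface.
    Assumes the caller has already determined that the HIP is inside the
    estimated surface (Sec. II-E)."""
    d = np.linalg.norm(points - hip, axis=1)
    neigh = points[(d > r1) & (d < r2)]
    if len(neigh) < 3:
        return np.zeros(3)                    # too few points for a reliable plane
    center = neigh.mean(axis=0)
    # covariance of the neighborhood; its smallest eigenvector is the plane normal
    C = (neigh - center).T @ (neigh - center) / len(neigh)
    eigvals, eigvecs = np.linalg.eigh(C)      # eigenvalues in ascending order
    n = eigvecs[:, 0]
    penetration = float(np.dot(center - hip, n))
    if penetration < 0.0:
        n, penetration = -n, -penetration     # orient the normal towards the HIP
    return stiffness * penetration * n        # Hooke's law along the plane normal
```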

F. Codec

According to the MMT method, the estimated point cloud model is transmitted to the master side. In our system, rather than transmitting the 3D point cloud data directly, we transmit the coordinate rotation and translation parameters along with the filtered depth image. On the master side, the same coordinate transformation is applied and the 3D point cloud model can thus be reconstructed. Although the size of the depth image is much smaller than that of the 3D point cloud, a compression scheme is needed to further reduce the data size. Since already small compression errors result in large deviations of the reconstructed 3D point cloud model on the master side, a lossless compression scheme is required. In our work, we treat the depth image as a grayscale image and employ a lossless H.264 compression scheme running in intra mode (with a GOP structure consisting only of I-frames).

G. Model update

While the robot is in free space, it is controlled using position control. The point cloud model is updated and transmitted to the master side twice every second (2 Hz). Since the object in the remote environment is assumed to be static, a higher update rate is not necessary. Once contact occurs on the slave side in one or more coordinate directions, i.e., f_s^x > f_thres and/or f_s^y > f_thres and/or f_s^z > f_thres (abbreviated as f_s^{x,y,z} > f_thres), the robot switches to force control in the corresponding direction(s). To enable stable teleoperation without any model-jump effect [14], the updating of the point cloud model is then stopped. However, due to the estimation error of the 3D point cloud, there will be small position differences between the real object and the estimated point cloud model, which results in the following three cases:

1) The slave is in contact with the object (f_s^{x,y,z} > f_thres) while the master HIP is still in free space (f_m ≈ 0);
2) The slave is in free space (f_s^{x,y,z} < f_thres) while contact occurs on the master side (f_m > 0);
3) The slave is in contact with the object (f_s^{x,y,z} > f_thres) at the same time instant as contact occurs on the master side (f_m > 0), but the position estimate of the object model is still wrong.

For cases 1 and 3, the estimated point cloud model is displaced to compensate for the estimation errors (Fig. 7). The displacement vector is computed as follows, and the case handling is summarized in the sketch after this list:

- Since the relative position between the slave end effector and the ToF camera is invariant, the position of the slave end effector in camera coordinates is fixed. Thus, the real contact position p_s0 (the position of the slave end effector) and the estimated contact position p_A in the point cloud model can be computed. If the position of p_A is identical to the current slave end-effector position p_s0, the model is correctly estimated.
- Otherwise, p_A and p_s0 are transmitted back to the master side and the displacement vector is computed as Δ = p_m − p_A, where p_m is the master HIP position.
- While the master HIP is leaving the object model, the model is shifted along the vector Δ′ = p_s0 − p_m, with the restriction that the model surface always stays just below the master HIP, until it reaches the correct position p_s0.

For case 2, the point cloud model is shifted along the direction of the current master velocity by a large displacement in order to delay the haptic contact on the master side (Fig. 7(b)), which changes the situation to case 1, and hence the procedures described above are applied.

Fig. 7: Model shifts for the error cases of the position estimation. (a) The slave is in contact with the environment before the master; the estimated object model is shifted along Δ. While the master is leaving the object model, the model is shifted along Δ′. (b) The master is in contact with the object model before the slave; the model is shifted by a large displacement, which changes the situation to case 1.
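The sketch below returns the model displacement for one control cycle. The per-axis 0.5 N threshold follows Sec. III-B, while the fixed 5 cm shift for case 2 and the simple `leaving` flag (which stands in for the gradual "just below the HIP" constraint) are our own assumptions.

```python
import numpy as np

F_THRES = 0.5   # per-axis contact threshold in newtons (cf. Sec. III-B)

def model_displacement(f_s, f_m, p_m, p_s0, p_A, v_m, leaving=False, big_shift=0.05):
    """Return the shift to apply to the local point cloud model for the
    three error cases of Sec. II-G (all arguments are length-3 arrays)."""
    slave_contact = bool(np.any(np.abs(f_s) > F_THRES))
    master_contact = bool(np.linalg.norm(f_m) > 0.0)

    if slave_contact and not master_contact:
        # case 1: slave already in contact, master HIP still in free space
        return (p_s0 - p_m) if leaving else (p_m - p_A)
    if master_contact and not slave_contact:
        # case 2: master touched the misplaced model first; push the model
        # away along the master velocity to delay the master-side contact
        direction = v_m / (np.linalg.norm(v_m) + 1e-9)
        return big_shift * direction
    if slave_contact and master_contact:
        # case 3: both sides in contact but the model position is still wrong
        return p_m - p_A
    return np.zeros(3)   # both sides in free space: no shift needed
```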
III. EXPERIMENTAL RESULTS

In this section, an experiment is conducted to evaluate the performance of the proposed approach in a real teleoperation system with a complex remote environment.

A. Setup

The system setup is shown in Fig. 8(a). A Force Dimension Omega.6 and a KUKA LWR arm are used as the master and slave devices, respectively. A JR3 force sensor is mounted on the slave robot to measure the slave force f_s. The Argos 3D-P100 ToF camera is used to capture the depth images. The software environment is based on ROS and the SDK of Force Dimension.

Fig. 8: (a) Experimental setup and (b) the reconstructed 3D point cloud model for the semispherical steel shell.

B. Experimental design

To test the system performance, a semispherical steel shell is placed on the slave side as a complex rigid object. A smooth paper tape is pasted on the shell to reduce the friction (see Fig. 8(a)). The estimated point cloud model and the trajectory of the master HIP are illustrated in Fig. 8(b). The frame rate of the ToF camera is set to 50 fps. Due to the 2 Hz update rate of the point cloud model, a temporal average over every 25 frames is computed as the depth image input of our system. In addition, the rotation and translation for the coordinate transformation (Fig. 4) are obtained from the camera calibration (R_1, t_1) and the robot status in 3D space (R_2, t_2). The gap between the proxy radii r_1 and r_2 in Fig. 6 is chosen to be 5 mm, which is just larger than the noise level of the point cloud captured by the ToF camera. The force threshold f_thres is set to 0.5 N, considering the noise of the JR3 sensor. During the experiment, the forward and backward communication delays are set to a fixed value T_f = T_b = T_d = 500 ms.

C. Experimental results

Figs. 9(a)-(f) show the position and force signals on both the master and slave sides. From 0 s to about 3.5 s (point A to B in Fig. 9(f)), both the slave and the master HIP are in free space. Thus, the master force f_m is zero and the slave force f_s is nearly zero (due to the measurement noise of the JR3 force sensor). At about 3.5 s (point B), the robot comes into contact with the object and switches to the force control mode. If the object model were correctly estimated, the contact on the master side would occur at about 3 s (T_d = 500 ms). However, the estimated model position has a small error in the z-direction, which can be observed in Fig. 9(c) at point a (the contact position on the slave side) and point b (the contact position on the master side). Therefore, the master is still in free space until about 3.4 s (point b in Fig. 9(c)), while the slave remains in contact with the environment and waits for the commands sent by the master (point B to C). From about 3.5 s to 7 s, the master HIP is in contact with the estimated object model and moves on the model surface according to the trajectory shown in Fig. 8(b), and the slave follows the master's motion/force with a delay of 500 ms (point C to D). From point D to point E, the slave senses a force impulse due to a small pressure applied on the master side, which can be observed in the master position signals in the y and z directions between about 7 s and 7.5 s (Fig. 9(b),(c)). After about 8.5 s (point E), the slave starts to leave the object and the measured slave force reduces to zero (with small noise). During the contact, the mean force errors in the x, y and z directions are 0.33 N, 0.43 N and 0.28 N, with standard deviations of 0.14 N, 0.22 N and 0.05 N, respectively.

Fig. 9: Experimental results. (a)-(c) The master and slave positions in the x, y and z directions, respectively. (d)-(f) The master and slave forces in the x, y and z directions, respectively. (g) The data rate over time.

D. Results of the data transmission

The data rate as a function of time is shown in Fig. 9(g). From 0 s to about 3 s and after 9 s, the slave is in free space. Thus, the estimated object model is updated at a rate of 2 Hz. According to Fig. 9(g), the average data size of each depth image frame is about 3.8 kByte (including the rotation and translation parameters for the coordinate transformation). Therefore, the data rate in the communication channel is approximately r = 3.8 kB × 2 Hz ≈ 7.6 kB/s.
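For reference, a lossless intra-only H.264 stream of this kind can be produced with a standard encoder. The sketch below drives libx264 through the ffmpeg command line for a sequence of grayscale depth frames; the file names, frame pattern and the use of ffmpeg itself are our own assumptions and not part of the paper's implementation.

```python
import subprocess

def encode_depth_sequence(png_pattern="depth_%04d.png", out_file="depth.mkv"):
    """Encode a grayscale depth-image sequence losslessly with intra-only
    H.264 (every frame an I-frame), mirroring the codec settings of
    Sec. II-F. Requires ffmpeg with libx264 on the PATH."""
    cmd = [
        "ffmpeg", "-y",
        "-framerate", "2",       # model update rate: 2 Hz
        "-i", png_pattern,       # input depth frames as grayscale images
        "-c:v", "libx264",
        "-qp", "0",              # lossless quantization
        "-g", "1",               # GOP of 1 -> I-frames only (intra mode)
        "-pix_fmt", "gray",      # treat the depth map as a grayscale image
        out_file,
    ]
    subprocess.run(cmd, check=True)
```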
The maximal encoding time (including pre-filtering) over all frames is less than 5 ms, which is negligible compared to the communication delay and thus meets the computational-time requirement for real-time coding. During the slave's contact with the object, the updating of the point cloud model is stopped. However, the data rate is still not completely zero, since parameters such as the slave contact position p_s0 and the estimated contact position p_A in the point cloud model are needed for computing the displacement vectors on the master side (Fig. 7).

E. Discussion

1) Noise reduction: As discussed in Sec. 2, a small modeling error due to the noise of the depth image results in a force mismatch between the slave and master. Although a control scheme is employed to compensate for this error, with increasing delays the shifting of the object model leads to an unrealistic experience during the interaction [18]. Therefore, additional filters and distance sensors with higher resolution are needed to further reduce the estimation errors.

2) Estimation of the physical properties: In this paper, we only consider the object as a static rigid body. However, for complex environments more physical properties (such as softness and friction) should be included. To estimate these properties, an online estimation algorithm should be developed with the slave position, force and the point cloud model as inputs. Additional sensors could also be applied to remotely measure the object properties even before the slave is in contact with the environment.

3) Model update: In our system, the point cloud model is no longer updated while the slave is in contact with the object. However, both the object geometry and the physical properties could change over time. Therefore, a new updating scheme should be applied on the slave side to manage when and how to update the point cloud model while the slave is in contact with the object. A simple way to trigger the updates is to use the perceptual deadband method proposed in [19]: if the force difference between the slave and master is larger than a threshold, an update is triggered (Fig. 10). The update of the object model can concern both the geometric and the physical properties of the entire or a partial point cloud model.

Fig. 10: The modified system structure for the model update.
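A deadband-style trigger of this kind might look like the sketch below; the relative 10% threshold is an illustrative value, and the exact update policy (which part of the model to refresh) is left open, as in the text.

```python
import numpy as np

def update_triggered(f_s, f_m, deadband=0.1):
    """Perceptual-deadband update trigger in the spirit of [19]: request a
    model update when the slave/master force mismatch exceeds a fraction
    of the currently displayed force magnitude."""
    mismatch = np.linalg.norm(np.asarray(f_s) - np.asarray(f_m))
    reference = max(np.linalg.norm(np.asarray(f_m)), 1e-6)   # avoid division by zero
    return mismatch > deadband * reference
```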

IV. CONCLUSION

In this paper, we propose a point-cloud-based model-mediated teleoperation (MMT) system that extends the current MMT approach to complex environments in six DoF. In our system, the environment model is no longer approximated by a simple planar surface, but by a point cloud. A ToF camera is employed to capture depth images of the environment at a high frame rate. Due to the flexible working range of the ToF camera, the point cloud model can be updated even when the slave is close to the object, which enables a stable and precise modeling of the object's geometric properties. Function blocks such as pre-filtering, coordinate transformation, haptic rendering and the depth-image codec are developed accordingly. In addition, the position/force control scheme compensates for the estimation error of the model position caused by the noise of the depth-image measurements. In future work, the potential extensions discussed in Sec. 3 will be implemented. In addition, we plan to develop modeling algorithms for more complex environments such as soft and deformable objects. Moreover, subjective experiments will be conducted to evaluate both the subjective experience and the objective task performance of the proposed system.

ACKNOWLEDGMENT

This work has been supported by the European Research Council under the European Union's Seventh Framework Programme (FP7) / ERC Grant agreement. The authors would like to thank Nicolas Alt, Clemens Schuwerk, Rahul Chaudhari and Anas Al-Nuaimi for their technical support.
REFERENCES

[1] W. Ferrell and T. Sheridan. Supervisory control of remote manipulation. IEEE Spectrum, vol. 4, no. 10.
[2] D. Lawrence. Stability and transparency in bilateral teleoperation. IEEE Transactions on Robotics and Automation, vol. 9.
[3] G. Niemeyer and J.-J. Slotine. Stable adaptive teleoperation. IEEE Journal of Oceanic Engineering, vol. 16.
[4] R. Daniel and P. McAree. Fundamental limits of performance for force reflecting teleoperation. The International Journal of Robotics Research, vol. 17, no. 8.
[5] A. Bejczy, W. Kim and S. Venema. The phantom robot: predictive displays for teleoperation with time delay. In Proceedings of the IEEE International Conference on Robotics and Automation.
[6] A. Bejczy and W. Kim. Predictive displays and shared compliance control for time-delayed telemanipulation. In Proceedings of the International Conference on IROS, Ibaraki, Japan.
[7] T. Burkert, J. Leupold and G. Passig. A photo-realistic predictive display. Presence: Teleoperators and Virtual Environments, vol. 13, no. 1.
[8] P. Mitra and G. Niemeyer. Model mediated telemanipulation. International Journal of Robotics Research, vol. 27, no. 2.
[9] B. Willaert, J. Bohg, H. Brussel and G. Niemeyer. Towards multi-DoF model mediated teleoperation: using vision to augment feedback. IEEE International Workshop on HAVE.
[10] H. Li and A. Song. Virtual-environment modeling and correction for force-reflecting teleoperation with time delay. IEEE Transactions on Industrial Electronics, vol. 54, no. 2.
[11] F. Mobasser and K. Hashtrudi-Zaad. Predictive teleoperation using laser rangefinder. In Proceedings of CCECE.
[12] X. Xu, J. Kammerl, R. Chaudhari and E. Steinbach. Hybrid signal-based and geometry-based prediction for haptic data reduction. IEEE International Workshop on HAVE.
[13] A. Achhammer, C. Weber, A. Peer and M. Buss. Improvement of model-mediated teleoperation using a new hybrid environment estimation technique. In Proceedings of the International Conference on Robotics and Automation.
[14] B. Willaert, H. Brussel and G. Niemeyer. Stability of model-mediated teleoperation: discussion and experiments. Eurohaptics.
[15] M. M. Oliveira, B. Bowen, R. McKenna and Y.-S. Chang. Fast digital image inpainting. In Proceedings of the International Conference on VIIP, Marbella, Spain.
[16] F. Rydén, S. Nia Kosari and H. J. Chizeck. Proxy method for fast haptic rendering from time varying point clouds. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA.
[17] J. Poppinga, N. Vaskevicius, A. Birk and K. Pathak. Fast plane detection and polygonalization in noisy 3D range images. International Conference on Intelligent Robots and Systems, Nice, France.
[18] X. Xu, G. Paggetti and E. Steinbach. Dynamic model displacement for model-mediated teleoperation. IEEE World Haptics Conference, Daejeon, Korea.
[19] P. Hinterseer, E. Steinbach, S. Hirche and M. Buss. A novel, psychophysically motivated transmission approach for haptic data streams in telepresence and teleaction systems. In Proceedings of ICASSP, Philadelphia, PA, USA, 2005.


More information

MMX-Accelerated Real-Time Hand Tracking System

MMX-Accelerated Real-Time Hand Tracking System X-Accelerated Real-Time Hand Tracing Sstem Nianjun Liu and Brian C. Lovell Intelligent Real-Time Imaging and Sensing (IRIS) Group School of Computer Science and Electrical Engineering The Universit of

More information

Parametric Comparison of H.264 with Existing Video Standards

Parametric Comparison of H.264 with Existing Video Standards Parametric Comparison of H.264 with Existing Video Standards Sumit Bhardwaj Department of Electronics and Communication Engineering Amity School of Engineering, Noida, Uttar Pradesh,INDIA Jyoti Bhardwaj

More information

Low-resolution Character Recognition by Video-based Super-resolution

Low-resolution Character Recognition by Video-based Super-resolution 2009 10th International Conference on Document Analysis and Recognition Low-resolution Character Recognition by Video-based Super-resolution Ataru Ohkura 1, Daisuke Deguchi 1, Tomokazu Takahashi 2, Ichiro

More information

High Performance GPU-based Preprocessing for Time-of-Flight Imaging in Medical Applications

High Performance GPU-based Preprocessing for Time-of-Flight Imaging in Medical Applications High Performance GPU-based Preprocessing for Time-of-Flight Imaging in Medical Applications Jakob Wasza 1, Sebastian Bauer 1, Joachim Hornegger 1,2 1 Pattern Recognition Lab, Friedrich-Alexander University

More information

WHITE PAPER Personal Telepresence: The Next Generation of Video Communication. www.vidyo.com 1.866.99.VIDYO

WHITE PAPER Personal Telepresence: The Next Generation of Video Communication. www.vidyo.com 1.866.99.VIDYO WHITE PAPER Personal Telepresence: The Next Generation of Video Communication www.vidyo.com 1.866.99.VIDYO 2009 Vidyo, Inc. All rights reserved. Vidyo is a registered trademark and VidyoConferencing, VidyoDesktop,

More information

Construction and experiment on micro-gyroscope detection balance loop

Construction and experiment on micro-gyroscope detection balance loop International Conference on Manufacturing Science and Engineering (ICMSE 05) Construction and experiment on micro-groscope detection balance loop Wang Xiaolei,,a *, Zhao Xiangang3, Cao Lingzhi, Liu Yucui,

More information

A Prototype For Eye-Gaze Corrected

A Prototype For Eye-Gaze Corrected A Prototype For Eye-Gaze Corrected Video Chat on Graphics Hardware Maarten Dumont, Steven Maesen, Sammy Rogmans and Philippe Bekaert Introduction Traditional webcam video chat: No eye contact. No extensive

More information

Subspace Analysis and Optimization for AAM Based Face Alignment

Subspace Analysis and Optimization for AAM Based Face Alignment Subspace Analysis and Optimization for AAM Based Face Alignment Ming Zhao Chun Chen College of Computer Science Zhejiang University Hangzhou, 310027, P.R.China zhaoming1999@zju.edu.cn Stan Z. Li Microsoft

More information

2.1 Three Dimensional Curves and Surfaces

2.1 Three Dimensional Curves and Surfaces . Three Dimensional Curves and Surfaces.. Parametric Equation of a Line An line in two- or three-dimensional space can be uniquel specified b a point on the line and a vector parallel to the line. The

More information

Study and Implementation of Video Compression standards (H.264/AVC, Dirac)

Study and Implementation of Video Compression standards (H.264/AVC, Dirac) Study and Implementation of Video Compression standards (H.264/AVC, Dirac) EE 5359-Multimedia Processing- Spring 2012 Dr. K.R Rao By: Sumedha Phatak(1000731131) Objective A study, implementation and comparison

More information

Development of Easy Teaching Interface for a Dual Arm Robot Manipulator

Development of Easy Teaching Interface for a Dual Arm Robot Manipulator Development of Easy Teaching Interface for a Dual Arm Robot Manipulator Chanhun Park and Doohyeong Kim Department of Robotics and Mechatronics, Korea Institute of Machinery & Materials, 156, Gajeongbuk-Ro,

More information

Determining optimal window size for texture feature extraction methods

Determining optimal window size for texture feature extraction methods IX Spanish Symposium on Pattern Recognition and Image Analysis, Castellon, Spain, May 2001, vol.2, 237-242, ISBN: 84-8021-351-5. Determining optimal window size for texture feature extraction methods Domènec

More information

Medical Robotics. Control Modalities

Medical Robotics. Control Modalities Università di Roma La Sapienza Medical Robotics Control Modalities The Hands-On Acrobot Robot Marilena Vendittelli Dipartimento di Ingegneria Informatica, Automatica e Gestionale Control modalities differ

More information

Autonomous Mobile Robot-I

Autonomous Mobile Robot-I Autonomous Mobile Robot-I Sabastian, S.E and Ang, M. H. Jr. Department of Mechanical Engineering National University of Singapore 21 Lower Kent Ridge Road, Singapore 119077 ABSTRACT This report illustrates

More information

RIEGL VZ-400 NEW. Laser Scanners. Latest News March 2009

RIEGL VZ-400 NEW. Laser Scanners. Latest News March 2009 Latest News March 2009 NEW RIEGL VZ-400 Laser Scanners The following document details some of the excellent results acquired with the new RIEGL VZ-400 scanners, including: Time-optimised fine-scans The

More information

DINAMIC AND STATIC CENTRE OF PRESSURE MEASUREMENT ON THE FORCEPLATE. F. R. Soha, I. A. Szabó, M. Budai. Abstract

DINAMIC AND STATIC CENTRE OF PRESSURE MEASUREMENT ON THE FORCEPLATE. F. R. Soha, I. A. Szabó, M. Budai. Abstract ACTA PHYSICA DEBRECINA XLVI, 143 (2012) DINAMIC AND STATIC CENTRE OF PRESSURE MEASUREMENT ON THE FORCEPLATE F. R. Soha, I. A. Szabó, M. Budai University of Debrecen, Department of Solid State Physics Abstract

More information

Spike-Based Sensing and Processing: What are spikes good for? John G. Harris Electrical and Computer Engineering Dept

Spike-Based Sensing and Processing: What are spikes good for? John G. Harris Electrical and Computer Engineering Dept Spike-Based Sensing and Processing: What are spikes good for? John G. Harris Electrical and Computer Engineering Dept ONR NEURO-SILICON WORKSHOP, AUG 1-2, 2006 Take Home Messages Introduce integrate-and-fire

More information