MODEL BASED VISUAL RELATIVE MOTION ESTIMATION AND CONTROL OF A SPACECRAFT UTILIZING COMPUTER GRAPHICS
Fuyuto Terui
Japan Aerospace Exploration Agency
Jindaiji-Higashimachi, Chofu-shi, Tokyo, JAPAN
[email protected]

Abstract

An algorithm is developed for estimating the motion (relative attitude and relative position) of large pieces of space debris, such as a failed satellite. The algorithm is designed to be used by a debris removal space robot which would perform six degree-of-freedom motion control (controlling its position and attitude simultaneously). The information required as feedback signals for such a controller is relative position, velocity, attitude and angular velocity, and these are expected to be measured or estimated from image data. The algorithm combines stereo vision, 3D model matching using the ICP (Iterative Closest Point) algorithm, and an extended Kalman filter to increase the reliability of the estimates. To evaluate the algorithm, a simulator is prepared to reproduce the on-orbit optical environment in terrestrial experiments, and the motion of a miniature satellite model is estimated using images obtained from the simulator. In addition, a six DOF (degrees of freedom) manoeuvre simulation using the developed image-based motion estimation algorithm was successfully carried out for proximity flight around a failed satellite rendered by CG (computer graphics).

1 INTRODUCTION

As the number of satellites continues to increase, space debris is becoming an increasingly serious problem for near-earth space activities, and effective measures to mitigate it are important. Satellite end-of-life de-orbiting and orbital lifetime reduction will be effective in reducing the amount of debris by reducing the probability of collisions, but this approach cannot be applied to inert satellites and debris. On the other hand, a debris removal space robot comprising a spacecraft (chaser) that actively removes space debris and retrieves malfunctioning satellites (targets) is considered a complementary approach [6]. The concept of such a removal space robot is shown in Fig. 1. After rendezvous with the target and approach to approximately 50 m using data from ground observations, satellite navigation positioning and radar, the chaser maintains a constant distance from the target measured using images taken by onboard cameras. During this station-keeping phase, the chaser captures images of the target to allow remote visual inspection and measures its motion by image processing both onboard and on the ground. Since the target is non-cooperative, there is no communication, no special markings or retro-reflectors to assist image processing, and no handles for capturing the target.
Figure 1: On-orbit operation of a debris removal space robot requiring measurement using image data; (top) station keeping and motion measurement; (middle) fly-around and final approach; (bottom) capture of the target

The next phase is the fly-around and final approach. The chaser manoeuvres towards the target to within the range of a capturing device such as a manipulator arm in order to capture a designated part of the target. During the fly-around and final approach phase, the chaser must control its position and attitude simultaneously. Such six degree-of-freedom control becomes more difficult if the target is changing attitude, for example by nutation [7]. The information required as feedback signals for such a controller is relative position, velocity, attitude and angular velocity, and these are expected to be measured or estimated from image data onboard.

This paper deals with the technical elements, motion estimation and six degree-of-freedom control, expected to be necessary for the development of such a debris removal space robot. A terrestrial experiment facility called the On-orbit Visual Environment Simulator is used in this development and is described below. Stereo vision is applied to measure the three-dimensional shape of a miniature satellite model using images obtained from this simulator. After that, model-based 3D matching between groups of point data (the ICP (Iterative Closest Point) algorithm) is applied to estimate the relative attitude and position of the target with respect to the chaser. The output from ICP is then used as the input to an extended Kalman filter to obtain more reliable and accurate relative attitude and position estimates. Since the extended Kalman filter uses a model of the translational and attitude motion of both bodies (target and chaser) in space, its estimates are expected not to be strongly affected even when data from ICP are lost. To verify the utility of the proposed motion estimation algorithm and the 6 DOF control algorithm, a six DOF (degrees of freedom) manoeuvre simulation with the developed motion estimation using images from CG (computer graphics) was successfully carried out for proximity flight around a failed satellite.

2 MOTION MEASUREMENT USING IMAGES

2.1 On-orbit visual environment simulator

The on-orbit visual environment has two characteristics that make image processing difficult:
- intense, highly-directional (collimated) sunlight
- no diffuse background other than the earth
Figure 2: On-orbit Visual Environment Simulator

The first characteristic gives very high image contrast, with the earth's albedo the only diffuse light source. Since the target is non-cooperative, it is impossible to hope for markings on its surface specifically designed to assist image processing. Satellites are wrapped in Multi Layer Insulation (MLI) materials for thermal protection, such as second-surface aluminized Kapton (gold, specular), beta cloth (white, matte) and carbon-polyester coated Kapton (black, matte). Among these, aluminized Kapton is most commonly used and it gives the target the following optical characteristics:
- specular and wrinkled surface
- smooth edges
Because of this, optical features such as texture change according to the direction of lighting and view. Considering that it is not easy to mimic such images by computer graphics with sufficient fidelity to be used for developing image processing algorithms, a visual simulator is prepared in order to reproduce the characteristics of the on-orbit visual environment for preliminary terrestrial experiments.

Fig. 2 shows the configuration of the visual simulator at JAXA. This facility is a 1/10-scale model of the actual on-orbit configuration and generates simulated images as taken by a chaser on orbit. A light source simulating the sun illuminates a miniature satellite target and an earth albedo reflector that adds diffuse light to the environment. Actuators change the direction of the light source to simulate the change of sunlight direction caused by orbital motion. The attitude of the miniature satellite target is altered using a three-axis gimbal mechanism, and the position and attitude of the chaser stereo camera set are changed using a linear motion stage and a gimbal mechanism to simulate relative motion. The monochrome stereo camera with parallel lines-of-sight and a 40 mm baseline distance is located 1870 mm from the center of the model. The satellite model comprises a cube measuring approximately 300 mm × 250 mm × 200 mm with a 400 mm × 200 mm solar paddle and a radar antenna. The surface of the model is wrapped with a specular and wrinkled sheet simulating MLI.
Figure 3: (top-left) motion estimation using the On-orbit Visual Environment Simulator; (top-right) motion estimation utilizing CG; (bottom-left) 6-DOF manoeuvre simulation with motion estimation utilizing CG; (bottom-right) hardware-in-the-loop simulation for 6-DOF manoeuvre

2.2 Motion estimation algorithm

Various researchers have investigated image processing techniques for measuring the motion of targets in the space environment. Their methods vary depending upon their research objectives and assumptions ([4], [5], [1]). For this paper, we sought a strategy that would be able to use 3D shape information obtained from ordinary image data and that would not be affected by data loss from shadows, occlusion or specular reflection [8]. Fig. 4 shows our strategy for motion measurement as applied to the experiment of fig. 3 (top-left). The area-based stereo matching algorithm gives 3D position data for the viewable area of the satellite model's surface from a pair of cameras, but since only a limited area of the model is in view at any one instant, and the appearance of that area changes according to the model's attitude and the direction of illumination, the number of measured points obtained tends to be limited. The stereo matching algorithm also gives many incorrectly measured points due to matching failures, further reducing the number of accurately measured points. However, it is assumed that the geometry of the satellite can be obtained from design data, and so a three-dimensional shape model of the target may be constructed a priori and model matching can be applied between this model and the 3D measurement points obtained from stereo matching, allowing relative position and relative attitude to be estimated. A matching algorithm called the ICP (Iterative Closest Point) algorithm is used for this purpose.
Figure 4: A motion estimation strategy using images

Using the measured relative position and attitude as its input, the extended Kalman filter outputs estimates of these values with reduced noise and increased reliability. The extended Kalman filter also outputs predicted relative position and attitude in addition to the estimates. These predictions are used for pre-alignment and for front surface generation, which provides the model 3D points used in the ICP process and leads to good matching results.

2.3 Stereo vision

The area-based stereo vision algorithm uses multiple images taken from different viewpoints and generates a disparity map which contains displacement information between the cameras and points on the object. The final output of the algorithm is the 3D shape of the part of the object in view, reconstructed from the disparity map. In the area-based stereo method, a small window around each pixel in the image is matched using texture, i.e. by minimizing an image similarity function such as the SAD (Sum of Absolute Differences) of pixel intensities, defined as follows:

Q_{SAD}(I,J,d(I,J)) = \sum_{p=-N_1}^{N_1} \sum_{q=-N_2}^{N_2} \left| F_1(I+p, J+q, d) - G_0(I+p, J+q) \right|   (1)

F_1(I,J,d) = G_1(I+d, J).   (2)

where the size of the matching window is (2N_1+1) \times (2N_2+1). G_0(I,J) is the intensity of a pixel in the reference camera image (left camera) and G_1(I,J) is the intensity of the corresponding pixel in the right camera image. d(I,J) is the disparity, which is the positional difference of the corresponding point from the reference point, and this should be optimized to minimize Q_{SAD}(I,J,d) for each unit pixel. F_1(I,J,d) in eq. (2) is the pixel intensity of a point (I+d, J) in G_1 which is a candidate for the point corresponding to G_0(I,J). It should be noted that since the pair of cameras used are aligned side-by-side with parallel lines-of-sight, the epipolar line for finding the matching window in the right camera image will be parallel to the horizontal image axis. Therefore, d appears only in the horizontal image coordinate of G_1 in eq. (2). The optimized disparities d^*(I,J),

d^*(I,J) = \arg\min_{d} Q_{SAD}(I,J,d(I,J))   (3)

are then obtained for all pixels in G_0, and make up the disparity map.
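As a concrete illustration of eqs. (1)-(3), the sketch below computes a disparity map by brute-force SAD matching along the horizontal epipolar line. It is a minimal sketch only: the window half-sizes, disparity search range and the +d sign convention of eq. (2) are taken as stated, while everything else (array types, loop structure) is an assumption about one possible implementation, not code from the paper.

```python
import numpy as np

def sad_disparity_map(G0, G1, n1=4, n2=4, max_disp=32):
    """Brute-force area-based SAD matching (sketch of eqs. (1)-(3)).

    G0: left (reference) image, G1: right image, both 2D float arrays,
    ideally already band-pass filtered (e.g. LoG) as described in the text.
    The matching window is (2*n1+1) x (2*n2+1); the disparity is searched
    along the horizontal (second) image axis only, following eq. (2).
    """
    H, W = G0.shape
    disp = np.zeros((H, W), dtype=int)
    for i in range(n1, H - n1):
        for j in range(n2, W - n2):
            ref = G0[i - n1:i + n1 + 1, j - n2:j + n2 + 1]
            best_d, best_q = 0, np.inf
            for d in range(min(max_disp, W - n2 - j - 1) + 1):
                cand = G1[i - n1:i + n1 + 1, j + d - n2:j + d + n2 + 1]
                q_sad = np.abs(cand - ref).sum()   # eq. (1): sum of absolute differences
                if q_sad < best_q:
                    best_q, best_d = q_sad, d      # eq. (3): d* = argmin Q_SAD
            disp[i, j] = best_d
    return disp
```

A real implementation would add the post-processing steps described next (left-right consistency checking, curvature masking, median filtering) before triangulating the disparities into 3D points.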
Figure 5: Measured 3D points by stereo vision

To pre-process images for stereo matching, the LoG (Laplacian of Gaussian) filter, which is a spatial band-pass filter, is used for extracting and enhancing specific features in the image. Fig. 5 shows an example of the result of applying the stereo vision algorithm to a satellite model with specular reflection. The attitude of the miniature satellite is fixed and the stereo camera is located 1870 mm from the center of the model. The satellite model comprises a cube measuring approximately 300 mm × 250 mm × 200 mm with a 400 mm × 200 mm solar paddle and a 320 mm × 150 mm radar antenna. A pair of digital CCD cameras is used. The left pictures in fig. 5 show the images taken by each camera after compensation of lens distortion. It is quite difficult to notice the difference between them at first glance, but differences in the horizontal position between corresponding parts (disparity) can be observed. The right of the figure shows the measured 3D points. The original 3D points have many spurious noise-like outliers sprinkled around the satellite shape. Some of these are from the albedo reflector, the background curtain behind the model and the 3-axis gimbal mechanism beneath the model. The dots from the reflector and the curtain were removed using 3D position thresholding. The dots from the gimbal mechanism were suppressed by wrapping it in matte black cloth. Other noisy spurious points appear as if they were sprayed from the position of the camera; these are thought to be points where the stereo vision algorithm failed to find a matching window using texture for determining disparity. They are eliminated by curvature masking, left-right consistency checking and median filtering. It can be seen that 3D point positions are obtained for only a limited part of the model.

2.4 ICP algorithm

Fig. 6 shows the computational flow of the ICP algorithm [3]. The superscript (m) denotes the iteration number. The algorithm handles two point sets: a measured data point set P and a model data point set X^{(m)}. It first establishes correspondences between the data sets by finding the closest point y_i^{(m)} in X^{(m)} for each point p_i in P (the 3rd box). Next, it finds the rotational transformation R^{(m)} and translational transformation T^{(m)} for Y^{(m)} which minimize the cost function J shown in Fig. 6 (the 4th box). This cost function is the summation of the distances between corresponding points p_i and y_i^{(m)}. The algorithm in Horn [2] gives a closed-form solution for this problem via an eigenvalue problem, and finally the estimated relative position \hat{x}, \hat{y}, \hat{z}, which gives T, and the estimated quaternion \hat{q}, which gives the optimal R, are calculated directly. X^{(m)} and Y^{(m)} are then transformed using R^{(m)} and T^{(m)} (the 5th box). This process is repeated until the convergence criterion E < E_T is achieved or the iteration count exceeds the threshold m_T (the 6th and 7th boxes).
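A minimal sketch of this loop is given below. For the closed-form pose step it uses an SVD-based least-squares fit (the Kabsch/Umeyama solution), which minimises the same cost J as Horn's quaternion method cited in the paper; the iteration limit and convergence tolerance are illustrative assumptions.

```python
import numpy as np

def icp(P, X, max_iter=50, tol=1e-6):
    """Iterative Closest Point between a measured point set P (Nx3) and a model
    point set X (Mx3); returns rotation R and translation T mapping the model
    onto the measurement. The closed-form step is an SVD least-squares fit,
    an equivalent alternative to Horn's quaternion method [2]."""
    Y_model = X.copy()
    R_total, T_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(max_iter):
        # Correspondences: closest model point y_i for each measured point p_i (3rd box).
        d2 = ((P[:, None, :] - Y_model[None, :, :]) ** 2).sum(axis=2)
        Y = Y_model[np.argmin(d2, axis=1)]
        # Closed-form R, T minimising J = sum_i ||p_i - (R y_i + T)||^2 (4th box).
        p_c, y_c = P.mean(axis=0), Y.mean(axis=0)
        H = (Y - y_c).T @ (P - p_c)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        T = p_c - R @ y_c
        # Transform the model points and accumulate the total transformation (5th box).
        Y_model = Y_model @ R.T + T
        R_total, T_total = R @ R_total, R @ T_total + T
        # Convergence check on the mean residual distance (6th/7th boxes).
        err = np.linalg.norm(P - (Y @ R.T + T), axis=1).mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, T_total
```

As the next subsection explains, the quality of the initial alignment between P and X is what decides whether such an iteration converges to the correct registration or to a local minimum.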
Figure 6: ICP algorithm

2.5 Extended Kalman filter and front surface model

The simple and intuitive ICP algorithm has a number of advantages. Since it is an algorithm for point sets, it is independent of shape representation and does not require any local feature extraction. It can also handle a reasonable amount of noise, such as mismatched dots from stereo vision. The main disadvantage of the ICP algorithm is that correct registration (matching) is not guaranteed. Depending on the initial relative position and attitude between the two point sets, the result of matching could fall into a local minimum. In order to prevent this, pre-alignment of the model data set relative to the measured data set using some measure is required. The information used for the pre-alignment is the predicted measurement given by the extended Kalman filter. The states and measurements of the extended Kalman filter for position (X_p, Y_p) and attitude (X_a, Y_a) are as follows,

X_p = \begin{bmatrix} r^H_{TC} \\ \dot{r}^H_{TC} \end{bmatrix}, \quad Y_p = \begin{bmatrix} r^C_{CT} \\ \Delta r^C_{CT} \end{bmatrix}   (4)

X_a = \begin{bmatrix} q^T_I \\ \omega^T_{IT} \end{bmatrix}, \quad Y_a = \begin{bmatrix} q^T_C \\ \Delta q^T_C \end{bmatrix}   (5)

where r^H_{TC} represents the relative position vector from target to chaser expressed in the Hill frame {H} (see fig. 9) and r^C_{CT} represents the relative position vector from chaser to target expressed in the chaser-fixed frame {C}. q^T_I represents the quaternion from the inertial frame {I} to the target-fixed frame {T} and q^T_C represents the quaternion from the chaser-fixed frame {C} to the target-fixed frame {T}. \Delta means the difference from the previous time step.
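The paper's filter propagates the Hill-frame relative translational dynamics and the attitude kinematics of both bodies; as a much-simplified illustration of the predict/update structure only, the sketch below runs a discrete Kalman step on the translational state X_p = [r; r_dot] with a constant-velocity process model, treating the ICP relative position output as the measurement. The noise levels, time step and the reduced measurement are my assumptions, not values from the paper.

```python
import numpy as np

def kf_position_step(x, Pcov, z_icp, dt=5.0, q=1e-4, r=1e-2):
    """One predict/update cycle for the translational state x = [r; r_dot] (6,).

    Illustration only: a constant-velocity model stands in for the Hill-frame
    relative dynamics used in the paper, and the ICP relative position z_icp (3,)
    is used directly as the measurement. q, r, dt are arbitrary assumptions."""
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)                      # r_{k+1} = r_k + dt * r_dot_k
    Hm = np.hstack([np.eye(3), np.zeros((3, 3))])   # measurement picks out r
    Q, R = q * np.eye(6), r * np.eye(3)

    # Prediction step: x(k+1|k), P(k+1|k). The predicted measurement Hm @ x_pred
    # is what drives the pre-alignment / front surface selection described next.
    x_pred = F @ x
    P_pred = F @ Pcov @ F.T + Q

    # Update step with the ICP measurement.
    S = Hm @ P_pred @ Hm.T + R
    K = P_pred @ Hm.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z_icp - Hm @ x_pred)
    P_new = (np.eye(6) - K @ Hm) @ P_pred
    return x_new, P_new, Hm @ x_pred                # also return the predicted measurement
```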
Figure 7: ICP using a time series of images (0, 20, 40, 60, 80, 100 sec) [(red) measured points; (blue) model 3D points after ICP]

Using the predicted measurements Y_p(k+1|k) and Y_a(k+1|k), it is possible to know the portion of the 3D model data point set X which could be seen from the binocular camera and which corresponds to the measured data point set P. This viewable part of the 3D model data point set is called the Front Surface model and is used as the redefined X for ICP.
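One simple way to realise such a front-surface selection, assuming the model points carry outward surface normals from the design data, is to transform the model into the camera frame with the predicted pose and keep only the points whose normals face the camera and that fall inside the field of view. This is a sketch of one possible implementation under those assumptions, not the paper's own procedure; the half-angle of the field of view and the boresight-along-+Z convention are also assumptions.

```python
import numpy as np

def front_surface(model_pts, model_normals, R_pred, T_pred,
                  fov_half_angle=np.deg2rad(15)):
    """Select the portion of the 3D model visible from the camera (sketch).

    model_pts, model_normals: Mx3 arrays in the target-fixed frame.
    R_pred, T_pred: predicted rotation/translation from the target frame to the
    camera (chaser) frame, e.g. from the Kalman filter prediction.
    Returns the subset of model points usable as the ICP model set X."""
    pts_c = model_pts @ R_pred.T + T_pred        # model points in the camera frame
    normals_c = model_normals @ R_pred.T         # normals in the camera frame
    view_dirs = -pts_c / np.linalg.norm(pts_c, axis=1, keepdims=True)  # point -> camera

    facing = (normals_c * view_dirs).sum(axis=1) > 0.0   # outward normal faces the camera
    # Keep only points inside an assumed conical field of view around the +Z boresight.
    in_fov = pts_c[:, 2] > 0
    in_fov &= np.arccos(np.clip(pts_c[:, 2] / np.linalg.norm(pts_c, axis=1), -1, 1)) < fov_half_angle
    return model_pts[facing & in_fov]
```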
Figure 8: Estimated relative attitude (quaternion)

2.6 Motion estimation experiment using terrestrial simulator

A quasi-real-time program running on a personal computer (Intel Pentium 4, 2.0 GHz, 500 MB RAM) was developed which repeatedly performs model satellite attitude motion simulation, gimbal mechanism drive, image capture, stereo processing and motion estimation by ICP and Kalman filter. The attitude of the model satellite is simulated and is updated at 5 sec time steps. Since the stereo vision processing time is approximately 5 sec and attitude estimation takes a further 2 to 11 sec, the program was incapable of real-time processing, and so the motion of the gimbal mechanism for the next time step was deferred until the estimation calculation was completed at each iteration.

Several tests with different attitude motions have been carried out and one of the results is shown in figs. 8 and 7. Fig. 8 shows the actual and estimated relative quaternions. Fig. 7 shows the ICP matching result at each iteration. Measured points at t = 0, 5, 10, 15, 45, 50, 55, 85, 90, 95, 100 (sec) are classified as reliable and the rest of the points are classified as unreliable. The number of measured points at t = 0 (sec) is 1078 and the corresponding Front Surface model has 947 points. Both numbers are the result of thinning to 1/30 to save processing time. With the benefit of the first two reliable measurements, it seems that the chosen strategy for motion estimation using a time series of images worked properly in this case.

3 Six degrees of freedom manoeuvre simulation with motion estimation utilizing Computer Graphics

After confirming the feasibility of the motion estimation algorithm using images taken from the terrestrial experiment (fig. 3, top-left), the same algorithm was applied successfully to a software simulator using computer graphics (fig. 3, top-right). Then a closed-loop simulation of a six degrees of freedom manoeuvre with motion estimation utilizing computer graphics (fig. 3, bottom-left) was performed. Fig. 9 explains the conditions of the simulation.
The target is a failed satellite in Low Earth Orbit and the chaser is a free-flying space robot which rendezvouses with the target and flies in proximity to it utilizing image data as measurement. {T} and {C} are coordinate frames fixed to the target and chaser respectively, and {H} is the Hill frame with the same origin as {T}. Both the position and attitude controllers are designed applying sliding mode control [7].

3.1 Position controller

The position control force obtained from sliding mode control is as follows,

F = \alpha_p\, m\, \frac{S_p}{\|S_p\| + \epsilon_p}   (6)

where \alpha_p is a feedback gain, m is the mass of the chaser and \epsilon_p is a small positive scalar for preventing chattering. The switching surface S_p is defined as

S_p = v^C_e + k_p r^C_e.   (7)

k_p is a constant defining the switching surface. r^C_e and v^C_e in eq. (7) are the relative position and velocity between the required point for position control and the center of the chaser, defined in eqs. (8) and (9) below, as well as shown in fig. 9.

r^C_e = \hat{r}^C_C - \hat{D}^C_T r^T_{req}   (8)

v^C_e = \hat{v}^C_C - \hat{D}^C_T (\hat{\omega}^T_{IT} \times r^T_{req})   (9)

where \hat{D}^C_T is the direction cosine matrix transforming from {T} to {C} and \hat{\omega}^T_{IT} is the attitude rate of the target. \hat{r}^C_C, \hat{D}^C_T, \hat{v}^C_C and \hat{\omega}^T_{IT} are all estimated from image data.
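To make eqs. (6)-(9) concrete, the sketch below evaluates the boundary-layer-smoothed sliding-mode position force from the estimated quantities. The gains, mass and the leading minus sign (chosen so that the smoothed sliding term drives S_p toward zero with the error defined as in eq. (8)) are assumptions for illustration, not values or sign conventions taken from the paper.

```python
import numpy as np

def position_control_force(r_hat, v_hat, D_CT_hat, w_T_hat, r_req_T,
                           m=100.0, alpha_p=0.5, k_p=0.1, eps_p=1e-3):
    """Smoothed sliding-mode position control force, following eqs. (6)-(9).

    r_hat, v_hat : estimated relative position / velocity in the chaser frame {C}
    D_CT_hat     : estimated direction cosine matrix from {T} to {C}
    w_T_hat      : estimated target attitude rate, expressed in {T}
    r_req_T      : required control point, expressed in the target frame {T}
    Gains, mass and the overall sign are illustrative assumptions."""
    r_e = r_hat - D_CT_hat @ r_req_T                        # eq. (8) position error
    v_e = v_hat - D_CT_hat @ np.cross(w_T_hat, r_req_T)     # eq. (9) velocity error
    S_p = v_e + k_p * r_e                                   # eq. (7) switching surface
    # eq. (6): smoothed sgn term; the minus sign makes the force oppose S_p
    # for this (actual minus desired) definition of the error.
    return -alpha_p * m * S_p / (np.linalg.norm(S_p) + eps_p)
```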
Figure 9: Target, chaser, frames, vectors

Figure 10: Time sequence of the proximity manoeuvre

3.2 Attitude controller

The sliding mode attitude control torque is

T = \alpha_a\, I\, \frac{S_a}{\|S_a\| + \epsilon_a}   (10)

where \alpha_a is a feedback gain, I is the moment of inertia of the chaser and \epsilon_a is a small positive scalar for preventing chattering. The switching surface S_a is defined as follows.

S_a = \omega^C_e + k_{a1}\, q_{e\,vector}\, \mathrm{sgn}(q_{e\,scalar}) + k_{a2}\, q^C_{T\,vector}(2,1)\, \mathrm{sgn}(q^C_{T\,scalar})   (11)

k_{a1} is a constant defining the switching surface. q_{e\,vector} in eq. (11) is a quaternion representing the error angle \theta_{LOS} between the LOS vector of the onboard camera r_{LOS} and the relative position vector between target and chaser r_C, as shown in fig. 9. Since the major purpose of using q_{e\,vector} as a feedback signal for attitude control is to minimize \theta_{LOS}, there still remains an attitude error along the LOS axis. In order to make it small, q^C_{T\,vector}(2,1), which represents the attitude error between {T} and {C} along r_{LOS}, is also used as a feedback signal for defining the switching surface. The process for calculating q_e from \hat{r}^C_C and r^C_{LOS} is shown in eqs. (12)-(15). \hat{r}^C_C is given by the motion estimation using image data explained in the previous section.

q_{e\,vector} = \lambda \sin\left(\frac{\theta_{LOS}}{2}\right)   (12)

q_{e\,scalar} = \cos\left(\frac{\theta_{LOS}}{2}\right)   (13)

\lambda = \frac{\hat{r}^C_C \times r^C_{LOS}}{\|\hat{r}^C_C \times r^C_{LOS}\|}   (14)

\theta_{LOS} = \arccos\left(\frac{\hat{r}^C_C \cdot r^C_{LOS}}{\|\hat{r}^C_C\|\,\|r^C_{LOS}\|}\right)   (15)

\omega^C_e in eq. (11) is defined below. \omega^C_C is obtained from the gyros on the chaser. As in the case of the position control, \hat{D}^C_T and \hat{\omega}^T_{IT} are estimated from motion estimation utilizing image data.

\omega^C_e = \omega^C_C - \hat{D}^C_T \hat{\omega}^T_{IT}   (16)
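The LOS-pointing part of this attitude law can be sketched as follows: eqs. (12)-(16) are evaluated directly, the secondary roll-about-LOS feedback term of eq. (11) is omitted for brevity, and the gains, inertia matrix and leading sign are illustrative assumptions rather than the paper's values.

```python
import numpy as np

def attitude_control_torque(r_hat_C, r_los_C, w_gyro_C, D_CT_hat, w_T_hat,
                            I=np.diag([1.0, 1.0, 1.0]),
                            alpha_a=0.2, k_a1=0.05, eps_a=1e-3):
    """Smoothed sliding-mode attitude torque for LOS pointing (sketch of eqs. (10)-(16)).

    r_hat_C  : estimated target position in the chaser frame {C} (from image data)
    r_los_C  : camera line-of-sight vector in {C}
    w_gyro_C : chaser angular rate from gyros, in {C}
    The roll-about-LOS feedback term of eq. (11) is omitted; gains, inertia
    and the overall sign are illustrative assumptions."""
    cross = np.cross(r_hat_C, r_los_C)
    lam = cross / np.linalg.norm(cross)                       # eq. (14) rotation axis
    cos_t = np.dot(r_hat_C, r_los_C) / (np.linalg.norm(r_hat_C) * np.linalg.norm(r_los_C))
    theta_los = np.arccos(np.clip(cos_t, -1.0, 1.0))          # eq. (15) error angle
    q_e_vec = lam * np.sin(theta_los / 2.0)                   # eq. (12)
    q_e_scl = np.cos(theta_los / 2.0)                         # eq. (13)

    w_e = w_gyro_C - D_CT_hat @ w_T_hat                       # eq. (16) rate error
    S_a = w_e + k_a1 * q_e_vec * np.sign(q_e_scl)             # eq. (11), simplified
    # eq. (10): smoothed sgn term (sign chosen here so the torque opposes S_a).
    return -alpha_a * I @ S_a / (np.linalg.norm(S_a) + eps_a)
```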
3.3 Numerical simulation

The time sequence of the proximity manoeuvre is shown in fig. 10 and table 1. It is a proximity manoeuvre after rendezvous to a distance of approx. 18 [m]. After the station keeping phase, the chaser moves up by 10 [m] and then moves back to the original position. During this manoeuvre the chaser controls its position so that its own center of mass coincides with the desired position in the target-fixed frame {T}, and controls its attitude so that the LOS of its onboard camera points to the mass center of the target. It is assumed that the target is not undergoing attitude motion. Images of the target were generated by CG (computer graphics) and these are processed by the motion estimation algorithm (stereo vision + ICP + extended Kalman filter). It is assumed that the direction of sunlight is along the Y_T axis so that it is suitable for image capturing.

Table 1: Time table of the proximity manoeuvre

Table 2: Specifications of target (left) and chaser (right)

Table 2 shows the specifications of the target and chaser. The chaser is designed as a micro satellite with a mass of 3.59 [kg] and it has three wheels for attitude control and six thrusters for position control. The figures below show the result of the simulation. It can be seen that the position of the chaser is controlled with an error of approx. ±3 [m] in each direction (see figs. 11 and 12). As is seen from fig. 12, it is speculated that the error is mainly due to the estimated relative position error in r_e caused by attitude measurement and control error in \hat{D}^C_T. Fig. 13 shows the position of the center of the target in the FOV (Field Of View) of the chaser's onboard camera. It is controlled by the attitude controller to be within the area of the FOV where motion estimation from images is possible. Fig. 14 shows some examples of CG images from the left camera of the binocular vision system and the result of the motion estimation algorithm. The blue points in the lower row show the measured points from stereo vision and the red points show the model 3D points after matching by the ICP (Iterative Closest Point) algorithm. The column at 5 [sec] shows the initial state, the column at 450 [sec] shows an intermediate position moving from [0, 18, 0] [m] to [0, 18, 10] [m], the column at 700 [sec] is at [0, 18, 10] [m], the column at 750 [sec] is at an intermediate position going back to the initial position from [0, 18, 10] [m], and the column at 1000 [sec] is at the initial position. An image is captured every 5 [sec] and processed for motion estimation. It can be seen that the red model points were matched to the blue measured points successfully, generating relative position and attitude estimates.
Figure 11: 3D position of chaser

Figure 12: 3D position of chaser: X_T, Y_T, Z_T
Figure 13: Position of target in FOV

Figure 14: CG + measured model points
4 Concluding remarks and future work

Motion estimation of a large space debris object using image data was performed by applying stereo vision, the ICP (Iterative Closest Point) algorithm using a measured data point set and a model data point set, and an extended Kalman filter. Three-axis attitude motion and position were estimated in a terrestrial experiment using the On-orbit Visual Environment Simulator. A six DOF (degrees of freedom) manoeuvre simulation with motion estimation based on CG (computer graphics) images, applying the developed algorithm, was successfully carried out for proximity flight around a failed satellite. A more challenging six DOF manoeuvre simulation, such as the chaser following a nutating satellite, would be the next goal of this research. In addition, a hardware-in-the-loop simulation replacing the CG images with images taken in the On-orbit Visual Environment Simulator (fig. 3, bottom-right) is now under preparation.

References

[1] A. Cropp and P. Palmer. Pose estimation and relative orbit determination of a nearby target microsatellite using passive imagery. 5th Cranfield Conference on Dynamics and Control of Systems and Structures in Space, 2002.

[2] B. K. P. Horn. Closed-form solution of absolute orientation using unit quaternions. Journal of the Optical Society of America A, 4-4, 1987.

[3] P. J. Besl and N. D. McKay. A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14-2, 1992.

[4] M. D. Lichter and S. Dubowsky. Estimation of state, shape, and inertial parameters of space objects from sequences of range images. Proc. SPIE: Intelligent Robots and Computer Vision XXI: Algorithms, Techniques, and Active Vision, D. P. Casasent, ed.

[5] P. Jasiobedzki, M. Abraham, and M. Umasuthan. Robust 3D vision for autonomous space robotic operation. Proceedings of the 6th International Symposium on Artificial Intelligence and Robotics & Automation in Space (i-SAIRAS), June 18-22.

[6] S. Nishida. On-orbit servicing and assembly: Japanese perspective and more. Proceedings of the 24th International Symposium on Space Technology and Science, Miyazaki, JAPAN.

[7] F. Terui. Position and attitude control of a spacecraft by sliding mode control. Proceedings of the American Control Conference.

[8] F. Terui. Relative motion estimation and control to a failed satellite by machine vision. Space Technology, 27:90-96.
