VISION-BASED POSITION ESTIMATION IN MULTIPLE QUADROTOR SYSTEMS WITH APPLICATION TO FAULT DETECTION AND RECONFIGURATION


MASTER THESIS, SCHOOL OF ENGINEERS, UNIVERSITY OF SEVILLE

Author: Alejandro Suárez Fernández-Miranda
Supervising Teachers: Dr. Guillermo Heredia Benot, Dr. Aníbal Ollero Baturone


A strong man doesn't need to read the future, he makes his own.
(Solid Snake, Metal Gear Solid 2)


Table of Contents

1. INTRODUCTION
   1.1. Introduction
   1.2. General description of the project
   1.3. Related works
      Vision-based position estimation
      Visual tracking algorithms
      FDIR
      Quadrotor dynamic modeling and control
   1.4. Development time estimation
2. Vision-based position estimation in multiple quadrotor systems
   2.1. Problem description
   2.2. Model of the system
   2.3. Position estimation algorithm
   2.4. Visual tracking algorithm
   2.5. Experimental results
      Software implementation
      Description of the experiments
      Analysis of the results
         Fixed quadrotor with parallel cameras
         Fixed quadrotor with orthogonal camera configuration
         Fixed quadrotor with moving cameras
         Fixed quadrotor with orthogonal camera configuration and tracking loss
         Z-axis estimation with flying quadrotor and parallel camera configuration
         Depth and lateral motion in the XY plane
         Quadrotor executing circular and random trajectories
         Accuracy of the estimation
      Summary
   Evolution in the development of the vision-based position estimation system
3. Application of virtual sensor to Fault Detection and Identification
   3.1. Introduction
   3.2. Additive positioning sensor fault
   3.3. Lock-in-place fault
   3.4. Criterion for virtual sensor rejection
      Threshold function
4. Simulation of perturbations in the virtual sensor over quadrotor trajectory control
   4.1. Introduction
   4.2. Quadrotor trajectory control
      Dynamic model
      Attitude and height control
      Velocity control
      Trajectory generation
   4.3. Model of perturbations
      Sampling rate
      Delay
      Noise
      Outliers
      Packet loss
   4.4. Simulation results
      Different speeds and delays
      Noise and outliers with fixed delay
      Delay, noise, outliers and packet loss
Conclusions
REFERENCES

List of Figures

Figure 1. Estimated percentage of the development time for each of the phases of the project
Figure 2. Gantt diagram with the evolution of the project
Figure 3. Two quadrotors with cameras in their base tracking a third quadrotor whose position is to be estimated, represented by the green ball
Figure 4. Images taken during data acquisition experiments at the same time from both cameras, with two orange balls at the top of a Hummingbird quadrotor
Figure 5. Relative position vectors between the cameras and the tracked quadrotor
Figure 6. Pin-hole camera model
Figure 7. State machine implemented in the modified version of the CAMShift algorithm
Figure 8. Orange rubber hat at the top of the Hummingbird quadrotor used as visual marker
Figure 9. Camera configuration with parallel optical axes in the Y-axis of the global frame and fixed quadrotor
Figure 10. Position estimation error in XYZ (blue, green, red) and distance between cameras and quadrotor (magenta, black) for fixed quadrotor and parallel optical axes. Blue * marks correspond to instants with tracking loss from one of the cameras
Figure 11. Orthogonal configuration of the cameras
Figure 12. Position estimation error in XYZ (blue, green, red) and distance between cameras and quadrotor (magenta, black) for fixed quadrotor and orthogonal configuration of the cameras
Figure 13. Estimation error and distance to the cameras with fixed quadrotor and moving cameras, initially with parallel optical axes and finally with orthogonal configuration
Figure 14. Evolution of the position estimation error with multiple tracking losses (marked by a * character) in one of the cameras
Figure 15. Orthogonal camera configuration with tracked quadrotor out of the FoV for one of the cameras
Figure 16. Position estimation error and distance to the cameras with long duration tracking loss for one of the cameras (blue and green * characters) and both cameras (red * characters)
Figure 17. Number of consecutive frames with tracking loss (blue) and threshold (red)
Figure 18. Configuration of the cameras and the quadrotor for the Z-axis estimation experiment
Figure 19. Vicon height measurement (red) and vision-based estimation (blue)
Figure 20. Position estimation error in XYZ (blue, green, red) and distance between each of the cameras and the tracked quadrotor (magenta, black)
Figure 21. Configuration of the cameras for the experiment with depth (YE axis) and lateral motion (XE axis)
Figure 22. X-axis estimation with depth and lateral motion for the quadrotor
Figure 23. Y-axis estimation with depth and lateral motion for the quadrotor
Figure 24. Number of consecutive frames with tracking loss with depth and lateral motion for the quadrotor
Figure 25. Configuration of the cameras with the quadrotor executing circular and random trajectories
Figure 26. X-axis estimation and real position when the quadrotor is executing circular and random trajectories
Figure 27. Y-axis estimation and real position when the quadrotor is executing circular and random trajectories
Figure 28. Number of consecutive frames with tracking loss when the quadrotor is executing circular and random trajectories
Figure 29. Simulation of GPS data with drift error between t = 2 s and t = 3 s
Figure 30. Distance between position given by GPS and vision-based estimation with GPS drift error
Figure 31. GPS simulated data with fixed measurement from t = 11 s
Figure 32. Distance between vision-based position estimation and GPS with faulty data (black), and threshold (red)
Figure 33. Number of consecutive frames with tracking loss and threshold for rejecting virtual sensor estimation
Figure 34. Angle and distance between the camera and the tracked object
Figure 35. Estimation error and distance to the cameras with the cameras changing from parallel to orthogonal configuration
Figure 36. Distance between GPS simulated data and estimated position, and threshold corresponding to Figure 35
Figure 37. Two quadrotors with cameras in their base tracking a third quadrotor whose position is to be estimated, represented by the green ball
Figure 38. Images taken during data acquisition experiments at the same time from both cameras, with two orange balls at the top of a Hummingbird quadrotor
Figure 39. Reference path and trajectories followed in XY plane with different values of delay in XY position measurement, fixed delay of 1 ms in height measurement and V = 0.5 m/s
Figure 40. External estimation of XY position with Gaussian noise, outliers and a reference speed of V = 0.5 m/s
Figure 41. Trajectories followed by the quadrotor with noise and outliers (blue) and without them (black)
Figure 42. Quadrotor trajectories with simultaneous application of noise, delay, outliers and packet loss for V = 0.5 m/s (blue) and V = 0.75 m/s (green)
Figure 43. Reference and real value for height when XY position is affected by noise, delay, outliers and packet loss with a reference speed of V = 0.75 m/s

Acknowledgments

This work was partially funded by the European Commission under the FP7 Integrated Project EC-SAFEMOBIL (FP ) and the CLEAR Project (DPI C2-1) funded by the Ministerio de Ciencia e Innovación of the Spanish Government. The author wishes to acknowledge the support received from CATEC during the experiments carried out in its testbed. Special thanks to Miguel Ángel Trujillo and Jonathan Ruiz from CATEC, and to professors José Ramiro Martínez de Dios and Begoña C. Arrúe Ullés from the University of Seville, for their help. Finally, the author wants to highlight all the help and advice provided by the supervisors Guillermo Heredia Benot and Aníbal Ollero Baturone.

Publications

Accepted papers:
Suárez, A., Heredia, G., Ollero, A.: Analysis of Perturbations in Trajectory Control Using Visual Estimation in Multiple Quadrotor Systems. First Iberian Robotics Conference (2013)

Papers awaiting acceptance:
Suárez, A., Heredia, G., Martínez-de-Dios, R., Trujillo, M.A., Ollero, A.: Cooperative Vision-Based Virtual Sensor for Multi-UAV Fault Detection. International Conference on Robotics and Automation (2014)

1. INTRODUCTION

1.1. Introduction

Position estimation is an important issue in many mobile robotics applications where automatic position or trajectory control is required. This problem can be found in very different scenarios, including both terrestrial and aerial robots, with different specifications in accuracy, reliability, cost, weight, size or computational resources. Two main approaches can be considered: position estimation based on odometry, beacons or other internal sensors, or the use of a global positioning system such as GPS or Galileo. Each of these technologies has its advantages and disadvantages. The selection of a specific device will depend on the particular application, namely, the specifications in the operating conditions of the robot. For example, it is well known that GPS sensors only work with satellite visibility, so they cannot operate indoors, but they are extensively used in fixed-wing UAVs and other outdoor exploration vehicles. Another drawback of these devices is their low accuracy, with position errors around two meters, although centimeter accuracy can be obtained with Differential GPS (DGPS). For small indoor wheeled robots, a simple and low cost solution is to use odometry methods, integrating speed or acceleration information obtained from optical encoders or Inertial Measurement Units (IMU). However, the lack of a position reference will cause a drift error over time, so the estimation might become useless after a few seconds. In recent years, a great effort has been dedicated to solving the Simultaneous Localization and Mapping (SLAM) problem, making its application in real time possible, although it still carries high computational costs.

The current trend is to integrate multiple sources of information, fusing their data in order to obtain an estimation with better accuracy and reliability. The sensors can be either static at fixed positions or mounted on mobile robots. Multi-robot systems, for instance, are platforms where these techniques can be implemented naturally. This work is focused on multi-quadrotor systems with a camera mounted in the base of each UAV, so the position of a certain quadrotor will be obtained from the centroid of its projection on the image planes and the position and orientation of the cameras. Moreover, the Kalman filter used as estimator will also provide the velocity of the vehicle.

The external estimation obtained (position, orientation or velocity) can be used for controlling the vehicle. However, some aspects such as estimation errors, delays or estimation availability have to be considered carefully. The effects of the new perturbations introduced in the control loop should be analyzed in simulation before application in the real system, so potential accidents causing human or material damage can be avoided.

1.2. General description of the project

The goal of this work is the development of a system for obtaining a position estimation of a quadrotor being visually tracked by two cameras whose position and orientation are known. A simulation study based on data obtained from experiments will be carried out for detecting failures in internal sensors, allowing system reconfiguration to keep the vehicle under control.

If the vision-based position estimation provided by the virtual sensor is going to be used in position or trajectory control, it is convenient to study the effects of the associated perturbations (delays, tracking loss, noise, outliers) on the control. Before testing it in real conditions, with the associated risk of accidents and human or material damage, it is preferable to analyze the performance of the controller in simulation. Therefore, a simulator of the quadrotor dynamics and its trajectory control system, including the simulation of the identified perturbations, was built.

The position and velocity of the tracked object in 3D space will be obtained from an Extended Kalman Filter (EKF), taking as input the centroid of the object on every image plane of the cameras, as well as their position and orientation. Two visual tracking algorithms were used in this project: Tracking-Learning-Detection (TLD), and a modified version of the CAMShift algorithm. However, the TLD algorithm was rejected due to its high computational cost and its poor results when applied to quadrotor tracking, as it is based on template matching. On the other hand, CAMShift is a color-based tracking algorithm that uses the HSV color space for extracting color information (Hue component) into a single channel image, simplifying object identification and making it robust to illumination and appearance changes.

In multi-UAV systems such as formation flight, cooperative surveillance and monitoring, or aerial refueling, every robot might carry additional sensors, not for its own control, but for estimating part of the state of another vehicle, for example its position, velocity or orientation. This external estimation can be seen as a virtual sensor, in the sense that it provides a measurement of a certain signal computed from other sensors. In normal conditions, both internal and virtual sensors should provide similar measurements. However, consider a situation with a UAV approaching an area without satellite visibility: its GPS sensor is not able to provide position data, the IMU keeps integrating acceleration with an error that grows with time, and the difference between both sources becomes significant. If a certain threshold is exceeded, the GPS can be considered faulty, starting a reconfiguration process that handles this situation (a minimal sketch of this test is given at the end of this section).

For the external position estimation, a communication network is necessary for the interchange of the information used in its computation (time stamp, position and orientation of the cameras, centroid of the tracked object in the image plane). Although this is beyond the scope of this work, communication delays and packet losses should be taken into account when the virtual sensor is going to be used for controlling the UAV. A quadrotor simulator with its trajectory control system has been developed for studying the effects of a number of perturbations identified during experiments, including those related to communications.
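As a rough illustration of the fault detection logic above, the following C++ sketch compares the internal sensor position with the vision-based estimation and flags a fault when the discrepancy persists. The structure, threshold and persistence parameters are hypothetical placeholders, not the thesis implementation (Chapter 3 develops the actual threshold function).

```cpp
#include <cmath>

// Discrepancy test between the internal positioning sensor (e.g. GPS) and the
// vision-based virtual sensor: declare a fault when the distance between both
// position estimates stays above a threshold for several consecutive samples.
struct FaultDetector {
    double threshold;    // maximum tolerated discrepancy [m] (assumed value)
    int persistence;     // consecutive samples required to declare a fault
    int counter = 0;

    bool update(const double gps[3], const double visual[3]) {
        double d = std::sqrt((gps[0] - visual[0]) * (gps[0] - visual[0]) +
                             (gps[1] - visual[1]) * (gps[1] - visual[1]) +
                             (gps[2] - visual[2]) * (gps[2] - visual[2]));
        counter = (d > threshold) ? counter + 1 : 0;
        return counter >= persistence;  // true -> flag the sensor as faulty
    }
};
```

A detected fault would then trigger the reconfiguration, for example switching the position controller over to the virtual sensor estimation.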
The simulator was implemented as a MATLAB-Simulink block diagram that includes quadrotor dynamics, attitude, position and trajectory controllers, and a way-point generator.

Graphical and numerical results are shown in different conditions, highlighting the most important aspects in each case. These results should be used as reference only, as the effects of the perturbations on quadrotor performance will depend on the control scheme being used.

Finally, all position estimation experiments were performed holding the cameras by hand: they were not mounted in the base of any quadrotor. Moreover, both cameras were connected to the same computer through five-meter cables, which limited the movements around the tracked UAV. In the next step of the project (not considered here), the cameras will be mounted on the quadrotors, and image processing will be done onboard or in a ground control station. Onboard image acquisition and processing introduces additional problems such as vibrations, weight limitations, or available bandwidth.

1.3. Related works

The main contribution of this work is the application of visual position estimation to Fault Detection and Identification (FDI). However, a number of related issues are also treated, including visual tracking algorithms, quadrotor dynamic modeling and quadrotor control.

Vision-based position estimation

The problem of multi-UAV position estimation in the context of forest fire detection has been treated in [1], estimating motion from multiple planar homographies. Accelerometer, gyroscope and visual sensor measurements are combined in [2] using a non-linear complementary filter for estimating pose and linear velocity of an aerial robot. The Simultaneous Localization and Mapping (SLAM) problem has been applied to small UAVs outdoors, in partially structured environments [3]. Quadrotor control using onboard or ground cameras is described in [4] and [5]. Here both position and orientation measurements are computed from the images provided by a pair of cameras. Homography techniques are combined with a Kalman filter in [6] for obtaining UAV position estimation when building mosaics. Other applications where vision-based position estimation can be employed include formation flight and aerial refueling [7], [8], [9], [10].

Visual tracking algorithms

Visual tracking algorithms with application to position estimation of moving objects have to be fast enough to provide an accurate estimation. As commented earlier, the TLD algorithm [11] was tested in the first place due to its ability to adapt to changes in the appearance of the object. However, this algorithm is based on template matching, and the surfaces of the quadrotors

are not big enough, so most of the time the tracking was lost. Moreover, the execution time was too high due to the large number of operations involved in the correlations with the template list. Color-based tracking algorithms such as CAMShift [12] present good properties for this purpose, including simplicity, low computation time, invariance to changes in illumination, rotation and position, and noise rejection. A color marker in contrast with the background has to be placed on the object to be tracked. The problem with this algorithm appears when an object with a similar color is in the image, although this can be solved by considering additional features. The basic CAMShift assumes that the tracked object is always visible in the image, so it cannot handle tracking losses. Some modifications have been made to CAMShift to make tracking recovery possible when the object is temporarily occluded, when it changes its appearance or when similarly colored objects are contained in the scene [13]. A Kalman filter is used in [14] for handling occlusions, while a multidimensional color histogram in combination with motion information solves the problem of distinguishing similarly colored objects.

FDIR

Reliability and fault tolerance have always been an important issue in UAVs [20], where Fault Detection and Identification (FDI) techniques play an important role in the efforts to increase the reliability of the systems. It is even more important when teams of aerial vehicles cooperate closely with each other and with the environment, as is the case in formation flight and heterogeneous UAV teams, because collisions between the vehicles or between the vehicles and objects in the environment may arise. In a team of cooperating autonomous vehicles, FDI in which individual vehicles use their own sensors can be regarded as Component Level (CL) FDI. Most CL-FDI applications to UAVs that appear in the literature use model-based methods, which try to diagnose faults using the redundancy of some mathematical description of the system dynamics. Model-based CL-FDI has been applied to unmanned aircraft, either fixed-wing UAVs [21] or helicopter UAVs [22][23][24]. Team Level (TL) FDI exploits the team information for detection of faults. Most published works rely on transmission of the state of the vehicles through the communications channel for TL-FDI [25]. What has not been thoroughly explored is the use of the sensors onboard the other vehicles of the team for detection of faults in an autonomous vehicle, which requires sensing the state of a vehicle from the other team components.

Quadrotor dynamic modeling and control

Quadrotor modeling and control has been extensively treated in the literature. The derivation of the dynamic model is described in detail in [15], where some control methods applied in simulation and in real conditions can be found too. PID, LQ and Backstepping controllers have been tested in [16] and [17] on an indoor micro quadrotor. Mathematical modeling and experimental results in quadrotor trajectory generation and control can be found in [18].

Ref. [19] addresses the same problem, but the trajectory generation allows the execution of aggressive maneuvers.

1.4. Development time estimation

The development of this project can be divided into three phases:

- Development of the vision-based position estimation system
- Development of the quadrotor trajectory control simulator
- Documentation (papers, reports, memories)

The estimated percentage of time dedicated to each of these phases is represented in Figure 1. The Gantt diagram with the identified tasks and their start and end dates can be seen in Figure 2. The project started in November 2012, with the technical part being finished in June 2013. Since then, two papers have been sent to the ROBOT 2013 congress (accepted) and ICRA 2014 (awaiting acceptance), and the project report has been written.

Figure 1. Estimated percentage of the development time for each of the phases of the project

Figure 2. Gantt diagram with the evolution of the project


2. Vision-based position estimation in multiple quadrotor systems

2.1. Problem description

Consider a situation with three quadrotors A, B and C. Two of them, A and B, have cameras mounted in their base with known position and orientation referred to a global frame. Images taken from the cameras are sent along with their position and orientation to a ground station. Both cameras will try to stay focused on the third quadrotor, C, and a tracking algorithm will be applied to obtain the centroid of the object in every received image. An external position estimator executed in the ground station will use these data to obtain an estimation of quadrotor C's position, which can be used for position or trajectory control in case C does not have this kind of sensors, they are damaged, or they are temporarily unavailable. The situation described above is shown in Figure 3, where the cones represent the field of view of the cameras, the orange quadrotor is the one being tracked and the green ball corresponds to its position estimation.

Figure 3. Two quadrotors with cameras in their base tracking a third quadrotor whose position is to be estimated, represented by the green ball

One of the main issues in vision-based position estimation applied to trajectory or position control is the presence of delays in the control loop, which should not be too high to prevent the system from becoming unstable. The following sources of delay can be identified:

- Image acquisition delay
- Image transmission through the radio link
- Image processing for the tracking algorithm
- Position estimation and its transmission

The first two are imposed by the hardware and the available bandwidth. The last one is negligible in comparison with the others.

On the other hand, image processing is very dependent on the computation cost required by the tracking algorithm. In this work, the external position estimation system was developed and tested with real data, obtaining the position and orientation of the cameras and the tracked quadrotor from a Vicon Motion Capture System in the CATEC testbed. The visual tracking algorithm used was a modified version of the CAMShift algorithm. This color-based tracking algorithm uses the Hue channel of the HSV image representation for building a model of the object and detecting it, applying Mean-Shift for computing the centroid of the probability distribution. As this algorithm is based only on color information, a small orange ball was placed at the top of the tracked quadrotor, in contrast with the blue floor of the testbed. Figure 4 shows two images captured by the cameras during the data acquisition phase.

Figure 4. Images taken during data acquisition experiments at the same time from both cameras, with two orange balls at the top of a Hummingbird quadrotor

Although in a practical application the external position estimation process will run in real time, here the computations were done off-line in order to ease the development and debugging of the system, so the estimation was carried out in two phases:

1) The data acquisition phase, where the images and the measurements of the position and orientation of both cameras and the tracked object were captured along with the time stamp and saved into a file and a directory containing all images.

2) The position estimation phase, corresponding to the execution of the extended Kalman filter that makes use of the captured data to provide an off-line estimation of the quadrotor position at every instant indicated by the time stamp.

As normal cameras do not provide depth information (unless other constraints are considered, such as tracked object size), two or more cameras are needed in order to obtain the position of the quadrotor in three-dimensional space. Even with one camera, if it changes its position and orientation and the tracked object movement is reduced, the position can be estimated. One of the main advantages of using the Kalman filter is its ability to integrate multiple sources of information, in the sense that it will try to provide the best estimation independently of the number of observations available at a certain instant. The results of the experiments presented here were obtained with two cameras, although in some cases the tracked object was occluded or out of the field of view (FoV) for one or both cameras.

The extended Kalman filter equations described later were obtained for two cameras, but they can be easily modified to consider an arbitrary number of cameras.

2.2. Model of the system

The system for the vision-based position estimation of a moving object using two cameras is represented in Figure 5. The cameras are assumed to be mounted on the quadrotors but, for clarity, these have not been drawn.

Figure 5. Relative position vectors between the cameras and the tracked quadrotor

The position and orientation of the cameras and the tracked quadrotor will be referred to the fixed frame {E} = {X_E, Y_E, Z_E}. For this problem, P_CAM1, P_CAM2 and the rotation matrices R^E_CAM1 and R^E_CAM2 are known. The following relationships between position vectors are derived:

P_Obj = P_CAM1 + R^E_CAM1 · p^CAM1_Obj = P_CAM2 + R^E_CAM2 · p^CAM2_Obj    ( 1 )

The tracking algorithm will provide the centroid of the tracked object. The pin-hole camera model relates the position of the object referred to the camera coordinate system in 3D space with its projection on the image plane. Assuming that the optical axis is X, then:

x_IM = f_x · (y^CAM_Obj / x^CAM_Obj) ;  y_IM = f_y · (z^CAM_Obj / x^CAM_Obj)    ( 2 )

where f_x and f_y are the focal lengths in both axes of the cameras, assumed to be equal for all cameras. Figure 6 represents the pin-hole camera model with the indicated variables.

Figure 6. Pin-hole camera model

A model of the camera lens is also needed for compensating the typical radial and tangential distortion. A calibration process using a chessboard or circles pattern is required for obtaining the distortion coefficients. There are two ways of compensating this kind of perturbation:

A) Backward compensation: given the centroid of the object in the image plane, the distortion is undone so the ideal projection of the point is obtained. However, it might require numerical approximations if the equations are not invertible.

B) Forward compensation: the position estimator obtains an estimation of the object centroid, and the distortion model is applied directly to it. The drawback of this solution is that the distortion equations should be considered when computing the Jacobian matrix; otherwise a slight error must be accepted.

The distorted point on the image plane is computed as follows:

x_d = (1 + k_c(1)·r^2 + k_c(2)·r^4 + k_c(5)·r^6) · x_n + dx    ( 3 )

Here x_n = [x, y]^T is the normalized image projection (without distortion), r^2 = x^2 + y^2, and dx is the tangential distortion vector:

dx = [ 2·k_c(3)·x·y + k_c(4)·(r^2 + 2·x^2) ;  k_c(3)·(r^2 + 2·y^2) + 2·k_c(4)·x·y ]    ( 4 )

The vector with the distortion coefficients, k_c, as well as the focal length and the principal point of the cameras, were obtained with the MATLAB camera calibration toolbox.
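As an illustration of the camera model in equations (2)-(4), the C++ sketch below projects a point expressed in a camera frame (optical axis X) into distorted pixel coordinates, following the same radial and tangential model. The function and variable names are hypothetical, and the intrinsics are placeholders for values obtained from the calibration toolbox.

```cpp
#include <array>

// Intrinsics as produced by a calibration toolbox (placeholder values).
struct CameraIntrinsics {
    double fx, fy;             // focal lengths [px]
    double cx, cy;             // principal point [px]
    std::array<double, 5> kc;  // distortion coefficients kc(1)..kc(5)
};

// Project a point given in the camera frame (optical axis = X, as in eq. (2))
// into pixel coordinates, applying the radial/tangential model of eqs. (3)-(4).
std::array<double, 2> projectPoint(const CameraIntrinsics& cam,
                                   double xc, double yc, double zc) {
    // Normalized image projection (pin-hole model, eq. (2) before scaling by f).
    double x = yc / xc;
    double y = zc / xc;

    double r2 = x * x + y * y;
    double radial = 1.0 + cam.kc[0] * r2 + cam.kc[1] * r2 * r2
                  + cam.kc[4] * r2 * r2 * r2;

    // Tangential distortion vector dx, eq. (4).
    double dx0 = 2.0 * cam.kc[2] * x * y + cam.kc[3] * (r2 + 2.0 * x * x);
    double dx1 = cam.kc[2] * (r2 + 2.0 * y * y) + 2.0 * cam.kc[3] * x * y;

    double xd = radial * x + dx0;  // eq. (3)
    double yd = radial * y + dx1;

    // Scale by the focal length and shift by the principal point.
    return { cam.fx * xd + cam.cx, cam.fy * yd + cam.cy };
}
```

Undoing the distortion (backward compensation) would invert these equations numerically, for example by fixed-point iteration, since they are not invertible in closed form.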

2.3. Position estimation algorithm

An Extended Kalman Filter (EKF) was used for the position estimation of the tracked object from its centroid in both images and the position and orientation of the cameras. The extended version of the algorithm is used because of the presence of nonlinearities in the rotation matrices and in the pin-hole camera model. For the EKF application, a nonlinear state space description of the system is considered in the following way:

x_k+1 = f(x_k) + w_k ;  z_k = h(x_k) + v_k    ( 5 )

where x_k is the state vector, f(·) is the state evolution function, z_k is the measurement vector, h(·) is the output function, and w_k and v_k are Gaussian noise processes. The state vector will contain the position and velocity of the tracked UAV referred to the fixed frame {E}, while the measurement vector will contain the centroid of the object in both images given by the tracking algorithm at the current instant, but also at the previous one (this is done to take velocity into account when updating the estimation). These two vectors are then given by:

x_k = [x, y, z, v_x, v_y, v_z]^T
z_k = [x_IM1,k, y_IM1,k, x_IM2,k, y_IM2,k, x_IM1,k-1, y_IM1,k-1, x_IM2,k-1, y_IM2,k-1]^T    ( 6 )

If no other information source can be used, a linear motion model is assumed, so the state evolution function will be:

f(x_k) = [x + v_x·Δt, y + v_y·Δt, z + v_z·Δt, v_x, v_y, v_z]^T    ( 7 )

Here Δt is the elapsed time between consecutive updates. If the acceleration of the tracked quadrotor can be obtained from its internal sensors or computed from its orientation, this information would be integrated in the last three terms of the state evolution function:

f(x_k) = [x + v_x·Δt, y + v_y·Δt, z + v_z·Δt, v_x + a_x·Δt, v_y + a_y·Δt, v_z + a_z·Δt]^T    ( 8 )
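For reference, a minimal C++/Eigen sketch of the EKF prediction step under the constant-velocity model of equation (7) might look as follows; the names and the noise matrix Q are assumptions, not the thesis code.

```cpp
#include <Eigen/Dense>

// State: [x, y, z, vx, vy, vz]^T referred to the fixed frame {E}.
using Vec6 = Eigen::Matrix<double, 6, 1>;
using Mat6 = Eigen::Matrix<double, 6, 6>;

// EKF prediction under the constant-velocity model of eq. (7).
// F is also the Jacobian of f(.) since the model is linear in the state.
void ekfPredict(Vec6& x, Mat6& P, const Mat6& Q, double dt) {
    Mat6 F = Mat6::Identity();
    F.block<3, 3>(0, 3) = dt * Eigen::Matrix3d::Identity();  // position += velocity * dt

    x = F * x;                      // state propagation, eq. (7)
    P = F * P * F.transpose() + Q;  // covariance propagation
}
```

The correction step then relies on the projection equations and the Jacobian J_h derived in the following paragraphs.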

On the other hand, it is necessary to relate the measurable variables (the centroid of the tracked object on every image plane) with the state variables. This can be done through the equations of the model:

x^n_IM = f_x · [r^n_21·(x − P^n_x) + r^n_22·(y − P^n_y) + r^n_23·(z − P^n_z)] / [r^n_11·(x − P^n_x) + r^n_12·(y − P^n_y) + r^n_13·(z − P^n_z)]
y^n_IM = f_y · [r^n_31·(x − P^n_x) + r^n_32·(y − P^n_y) + r^n_33·(z − P^n_z)] / [r^n_11·(x − P^n_x) + r^n_12·(y − P^n_y) + r^n_13·(z − P^n_z)]    ( 9 )

Here r^n_ij is the ij element of the rotation matrix from the fixed frame {E} to the frame of the n-th camera, and P^n = (P^n_x, P^n_y, P^n_z) is the position of that camera. For computing at instant k the centroid of the object at instant k−1, we make use of the following equation:

x_k−1 = x_k − v_x·Δt (and similarly for y and z)    ( 10 )

Then, an expression equivalent to (9) is obtained. The Jacobian matrices J_f and J_h used in the EKF equations can be easily obtained from these two expressions. However, only the matrix corresponding to the state evolution function is reproduced here due to space limitations:

J_f = [ I_3  Δt·I_3 ;  0_3  I_3 ]    ( 11 )

Now let us consider the general position estimation problem with N cameras. The state vector is the same as defined in (6); however, for practical reasons, the measurement vector will only contain the centroid of the tracked object at instants k and k−1 for one camera at a time:

z^n_k = [x^n_IM,k, y^n_IM,k, x^n_IM,k−1, y^n_IM,k−1]^T    ( 12 )

The vector z^n_k represents the measurements of the n-th camera at iteration k. As the number of cameras increases, the assumption of simultaneous image acquisition might not be valid, and a model with different image acquisition times is preferred instead. This requires a synchronization process with a global time stamp indicating when each image was taken. Let us define the following variables:

t: time instant of the last estimation update
t^n_acq: time instant when the image of the n-th camera was captured

If t^m_acq is the time stamp of the last image captured, then:

Δt = t^m_acq − t ;  t = t^m_acq    ( 13 )

The rest of the computations are the same as described for the case with two cameras.

2.4. Visual tracking algorithm

Visual tracking applied to position estimation and control imposes hard restrictions on computational time, in the sense that delays in position measurements significantly affect the performance of the trajectory control, limiting the speed of the vehicle to prevent it from becoming unstable. In addition, vision-based tracking algorithms must provide other properties:

- Robustness to light conditions
- Noise immunity
- Ability to support changes in the orientation of the tracked object
- Low memory requirements, which usually also implies low computation time
- Capability to recover from temporary losses (occlusions, object out of FoV)
- Support for image blurring due to camera motion
- Applicability with moving cameras

It must be taken into account that quadrotors have a small surface, and their projection on the image can change considerably due to their X shape. On the other hand, using color markers does not affect the quadrotor control, but they simplify the visual detection task. In this research, we tested two tracking algorithms: TLD (Tracking-Learning-Detection) and a modified version of CAMShift. The first one builds a model of the tracked object during execution, adding a new template to the list when a significant difference between the current observation and the model is detected. For smooth operation, TLD needs the tracked object to have significant edges and contours, since detection is made through a correlation between templates and image patches at different positions and scales. This makes its use for quadrotor tracking difficult due to the small surface and uniformity of the quadrotor shape. Experimental results show the following problems in the application of this algorithm:

- Significant error in centroid estimation
- Relatively high computation time
- Too many false positives
- The bounding box around the object tends to diverge
- The tracked object is usually lost when it moves far from the camera
- The tracked object is lost when the background is not uniform

Better results were obtained with a modified version of the CAMShift algorithm (Continuously Adaptive Mean-Shift). CAMShift is a color-based tracking algorithm, so a color marker has to be placed on a visible part of the quadrotor in contrast with the background color. In the tests, we put two small orange balls at the top of the quadrotor, while the floor of the testbed was blue. The tracked object (the two orange balls, not the quadrotor) is represented by a histogram of the Hue component containing the color distribution of the object. Here the HSV (Hue-Saturation-Value) image representation is used instead of the RGB color space. This representation allows the extraction of color information and its treatment as a one-dimensional magnitude, so histogram-based techniques can be applied. Saturation (color density) and Value (brightness) are limited in order to reject noise and other perturbations.

For every image received, the CAMShift algorithm computes a probability image, weighting the Hue component of every pixel with the color distribution histogram of the tracked object, so the pixels with color closer to the object will have higher probabilities. The Mean-Shift algorithm is then applied to obtain the maximum of the probability image in an iterative process. The process computes the centroid of the probability distribution within a window that slides in the direction of the maximum until its center converges. Denoting the probability image by P(x,y), the centroid of the distribution inside the search window is given by:

x_c = M_10 / M_00 ;  y_c = M_01 / M_00    ( 14 )

where M_00, M_10 and M_01 are the zero and first order moments of the probability image, computed as follows:

M_00 = Σ_x Σ_y P(x,y) ;  M_10 = Σ_x Σ_y x·P(x,y) ;  M_01 = Σ_x Σ_y y·P(x,y)    ( 15 )

The CAMShift algorithm returns an oriented ellipse around the tracked object, whose dimensions and orientation are obtained from the second order moments. The basic implementation of CAMShift assumes that there is a nonzero probability in the image at all times. However, if the tracked object is temporarily lost from the image due to occlusions or because it is out of the field of view, the algorithm must be able to detect the tracking loss and redetect the object, so tracking can be reset once the object is visible again. This can be seen as a two-state machine, shown in Figure 7 (a code sketch of the probability image and centroid computation is given below).
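The probability image and centroid computation described above is available in OpenCV, which the tracking module already uses. The following C++ sketch shows one plausible arrangement: Hue back-projection, CamShift iteration, and a loss test based on the zero-order moment of equation (15). The S/V limits and the loss threshold are assumed values, not those of the thesis.

```cpp
#include <opencv2/opencv.hpp>

// Tracks a color marker with CamShift on the Hue back-projection.
// 'hist' is the Hue histogram of the marker, built once from a user-selected
// region; 'window' is the current search window, updated in place.
// Returns false when the zero-order moment (eq. (15)) inside the window
// falls below a loss threshold, i.e. the marker is considered lost.
bool trackMarker(const cv::Mat& frameBGR, const cv::Mat& hist, cv::Rect& window) {
    cv::Mat hsv, mask, hue, backproj;
    cv::cvtColor(frameBGR, hsv, cv::COLOR_BGR2HSV);

    // Limit Saturation and Value to reject noise (thresholds are assumptions).
    cv::inRange(hsv, cv::Scalar(0, 60, 32), cv::Scalar(180, 255, 255), mask);

    // Probability image: back-project the Hue channel through the histogram.
    int fromTo[] = {0, 0};
    hue.create(hsv.size(), CV_8U);
    cv::mixChannels(&hsv, 1, &hue, 1, fromTo, 1);
    float hueRange[] = {0, 180};
    const float* ranges[] = {hueRange};
    int channels[] = {0};
    cv::calcBackProject(&hue, 1, channels, hist, backproj, ranges);
    backproj &= mask;

    // Mean-Shift / CamShift iteration, eq. (14): slide the window towards the
    // centroid of the probability distribution until convergence.
    cv::CamShift(backproj, window,
                 cv::TermCriteria(cv::TermCriteria::EPS | cv::TermCriteria::COUNT,
                                  10, 1.0));

    // Loss detection: zero-order moment M00 inside the window (assumed threshold).
    cv::Rect roi = window & cv::Rect(0, 0, backproj.cols, backproj.rows);
    double m00 = roi.area() > 0 ? cv::sum(backproj(roi))[0] : 0.0;
    return m00 > 2000.0;  // hypothetical threshold; tune experimentally
}
```

In the modified algorithm, a false return value here would switch the state machine of Figure 7 from Tracking to Detecting.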

Figure 7. State machine implemented in the modified version of the CAMShift algorithm

For detecting the loss of the object and for its redetection, a number of criteria can be used:

- The zero order moment, as an estimation of the object size
- The size, dimensions or aspect ratio of the bounding box around the object
- One or more templates of the near surroundings of the color marker that include the quadrotor
- A vector of features obtained from SURF, SIFT or similar algorithms

For the detection process, the incoming image is divided into a set of patches and, for every patch, a measurement of the probability that the tracked object is contained there is computed; the detector then returns the position of the most probable patch. If this patch is a false positive, it will be rejected by the object loss detector, and a new detection process will begin. Otherwise, CAMShift will use this patch as its initial search window. The experimental results led us to conclude:

- CAMShift is about 5-10 times faster than TLD
- The modified version of CAMShift can recover in a short time from object loss once the object is visible again
- False positives are rejected by the object loss detector
- CAMShift can follow objects farther away than TLD
- CAMShift requires much fewer computational resources than TLD

2.5. Experimental results

This section presents graphical and numerical results of the vision-based position estimation using the algorithms explained above. First, the software developed specifically for these experiments is described, as well as the conditions, equipment and personnel involved.

Software implementation

Three software modules were developed to support the experiments of vision-based position estimation. These experiments were divided into two phases: the data acquisition phase and the data analysis phase. Real-time estimation experiments have not been done.

Data acquisition module: this program was written in C++ for both the Ubuntu 12.04 and Windows 8 operating systems, using the Eclipse Juno IDE and Microsoft Visual Studio Express 2012, respectively. It makes use of the OpenCV (Open Computer Vision) library, as well as the Vicon Data Stream SDK library. The program contains a main loop where images from the two cameras are captured and saved as individual files along with the measurements of the position and orientation of both cameras and the tracked object given by a Vicon Motion Capture System. These measurements are saved in a text file with their corresponding time stamp. Images and measurements are assumed to be captured at the same time for the estimation process although, in practice, these data are obtained sequentially, so there is a slight delay. Prior to the data acquisition loop, the user must specify the resolution of the cameras and the folder name where the images and Vicon measurements are stored.

Tracking and position estimation module: this program was also implemented in C++ for Ubuntu 12.04 and Windows 8, using the OpenCV and ROS (Robot Operating System) libraries. It was designed to accept data in real time but also data captured by the acquisition program; it has not been tested in real time and, until now, has only been used for off-line position estimation. The program contains a main loop with the execution of the tracking algorithm and the extended Kalman filter. It also performs the same functions as the data acquisition program, taking images from the cameras sequentially, as well as position and orientation measurements from Vicon. The modified version of the CAMShift algorithm returns the centroid of the tracked quadrotor for every image in both cameras. Then, the position estimation is updated with this information and visualized with the rviz application from ROS. It was found that using the ROS and Vicon Data Stream libraries simultaneously causes an execution error that has also been reported by other users. The tracking and position estimation program is not complete yet. The selection of the tracked object is done manually, drawing a rectangle around it. The modified CAMShift provides good results considering the fast movement of the quadrotor and the blurring of the images in some experiments, but in a number of situations it returns false positives when the tracked object is out of the field of view and there is an object with a similar color within the image. On the other hand, Kalman filter tuning takes too much time when performed with the tracking algorithm in the loop. However, the position estimation is computed from the position and orientation measurements and from the centroid of the tracked object on the image plane for both cameras, so the images are no longer necessary once the centroids have been obtained by the tracking algorithm.

Position estimation module: the position estimation algorithm was implemented in a MATLAB script in order to make the Kalman filter tuning easier and faster.

It takes as input the position and orientation measurements of the cameras and the quadrotor (used as ground truth), the time stamp, the centroid of the tracked object given by CAMShift, and a flag indicating, for every camera, whether the tracking is lost in the current frame. As output, the estimator provides the position and velocity of the quadrotor in the global coordinate system. In the data acquisition phase, the real position of the quadrotor was also recorded, making the computation of the estimation error possible. This magnitude, the distance between the tracked quadrotor and the cameras, the tracking loss flag and other signals are represented graphically for better analysis of the results.

Description of the experiments

The data acquisition experiments were carried out in the CATEC testbed, using its Vicon Motion Capture System for obtaining the position and orientation of the cameras and the tracked quadrotor. The acquisition program was executed on a workstation provided by CATEC or on a laptop provided by the University of Seville. Two Logitech C525 USB cameras were connected to the computer through five-meter USB cables. This limited the mobility of the cameras during the experiments when following the quadrotor. The cameras were mounted on independent bases whose position was measured by Vicon. The optical axis of each camera corresponded to the X axis of its base. It is important for the estimation that both axes are parallel; otherwise, an estimation error proportional to the distance between the cameras and the quadrotor is derived. The tracked object was a Hummingbird quadrotor. Two orange balls or a little rubber hat were placed at the top of the UAV as a visual marker in contrast with the blue floor of the testbed, as shown in Figure 8. The cameras tried to stay focused on this marker.

One important aspect of the cameras is the autofocus. For the data acquisition experiments, two webcam models were used: the Genius eFace 2025 (manually adjustable focus) and the Logitech C525 (autofocus). For applications with moving objects, cameras with fixed or manually adjustable focus are not recommended. On the other hand, the image quality of the Logitech C525 was much better than that of the Genius eFace 2025.

Figure 8. Orange rubber hat at the top of the Hummingbird quadrotor used as visual marker

The experiments were carried out by three or four persons:

- The pilot of the quadrotor
- The person in charge of the data acquisition program
- Two persons for handling the cameras

At the beginning of each experiment, the coordinator indicates to the pilot and to those responsible for the cameras the position and motion pattern to be executed, according to the previously defined planning. Then, the resolution of the images and the name of the folder where the Vicon data and acquired images will be saved are specified. Each experiment took between 2 and 5 minutes. The total number of images acquired was around 40,000. The initial set up and the execution of the experiments were carried out in four hours.

Analysis of the results

The position estimation results are presented here for different conditions, explained separately. The experiments were designed to cover a wide range of situations and configurations, with different camera resolutions. The graphics corresponding to the estimation error also represent the distance between each of the cameras and the quadrotor for magnitude comparison. Typically, the estimation error in position is around 0.15 m for a mean distance of 5 m from the cameras although, as will be seen later, the error strongly depends on the relative position between the cameras and the quadrotor. The effect of tracking loss is represented with a blue or green * character for one of the cameras, and with a red * character if tracking is lost for both cameras.

Fixed quadrotor with parallel cameras

In this experiment, the quadrotor was fixed on the floor. The optical axes of the cameras were parallel, with a baseline of around 1.5 meters, as described in Figure 9. A resolution of 640x480 was selected. Figure 10 shows the estimation error in XYZ, as well as the distance between each of the cameras and the quadrotor. As can be seen, the position estimation error in the X and Z axes is around 15 cm; however, it reaches 3 m in the Y axis when the distance from the cameras is maximum. In general, the more parallel the optical axes of the cameras are, the higher the error in depth estimation is.

Figure 9. Camera configuration with parallel optical axes in the Y-axis of the global frame and fixed quadrotor

Figure 10. Position estimation error in XYZ (blue, green, red) and distance between cameras and quadrotor (magenta, black) for fixed quadrotor and parallel optical axes. Blue * marks correspond to instants with tracking loss from one of the cameras

Fixed quadrotor with orthogonal camera configuration

Now the configuration is the one shown in Figure 11. The optical axes of both cameras are orthogonal, corresponding to the best case for depth estimation. This fact is confirmed by the results shown in Figure 12, where it can be seen that the estimation error is reduced considerably.

Figure 11. Orthogonal configuration of the cameras

Figure 12. Position estimation error in XYZ (blue, green, red) and distance between cameras and quadrotor (magenta, black) for fixed quadrotor and orthogonal configuration of the cameras

Fixed quadrotor with moving cameras

This experiment is a combination of the two above. At the beginning, the cameras are parallel with a short baseline; that is why the estimation error shown in Figure 13 is initially high. Then, the cameras are moved until they reach the orthogonal configuration (at t = 17 s), reducing the error at the same time. Figure 14 represents in more detail the evolution of the estimation error when tracking loss occurs from t = 32.5 s until t = 34.5 s. In two seconds, the estimation error in the Y-axis changes by 5 cm due to the integration of the speed in the position estimation. In this case, the estimation is computed using monocular images from a single camera.

Figure 13. Estimation error and distance to the cameras with fixed quadrotor and moving cameras, initially with parallel optical axes and finally with orthogonal configuration

Figure 14. Evolution of the position estimation error with multiple tracking losses (marked by a * character) in one of the cameras

Fixed quadrotor with orthogonal camera configuration and tracking loss

The goal of this experiment is to study the effect of long-term tracking loss from one or both cameras on the position estimation. The quadrotor was in a fixed position with the cameras in an orthogonal configuration, as shown in Figure 15. Here, the quadrotor is out of the Field of View (FoV) of the right camera. The estimation error results are represented in Figure 16. The green and blue * characters represent tracking loss from the left or right camera, while red * characters correspond to tracking loss from both cameras simultaneously. The distance between each of the cameras and the tracked quadrotor has also been plotted in magenta and black. As can be seen, the error grows rapidly when the vision-based estimation becomes monocular: it increases by 1 meter in around 4 seconds. The number of consecutive frames with tracking loss can be used as a criterion for rejecting the position estimation, defining a maximum threshold. This idea is shown in Figure 17, which represents the number of consecutive frames with tracking loss and a threshold of 15 frames (a code sketch of this rejection rule follows Figure 15).

Figure 15. Orthogonal camera configuration with tracked quadrotor out of the FoV for one of the cameras
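A minimal sketch of this frame-count rejection criterion follows; the class and the default threshold are illustrative assumptions rather than the thesis code.

```cpp
// Rejects the virtual sensor estimation after too many consecutive frames
// with tracking loss (threshold of 15 frames, as in Figure 17).
class TrackingLossMonitor {
public:
    explicit TrackingLossMonitor(int threshold = 15) : threshold_(threshold) {}

    // Call once per frame; trackingOk = false when the marker was not found.
    void update(bool trackingOk) {
        consecutiveLosses_ = trackingOk ? 0 : consecutiveLosses_ + 1;
    }

    // True while the vision-based estimation should still be trusted.
    bool estimationValid() const { return consecutiveLosses_ <= threshold_; }

private:
    int threshold_;
    int consecutiveLosses_ = 0;
};
```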

Figure 16. Position estimation error and distance to the cameras with long duration tracking loss for one of the cameras (blue and green * characters) and both cameras (red * characters)

Figure 17. Number of consecutive frames with tracking loss (blue) and threshold (red)

Z-axis estimation with flying quadrotor and parallel camera configuration

In this experiment, the quadrotor height is estimated with the cameras in the configuration indicated in Figure 18 and a resolution of 1280x720 pixels. The pilot of the quadrotor was asked to perform movements along the Z-axis with an amplitude of two meters.


More information

Force/position control of a robotic system for transcranial magnetic stimulation

Force/position control of a robotic system for transcranial magnetic stimulation Force/position control of a robotic system for transcranial magnetic stimulation W.N. Wan Zakaria School of Mechanical and System Engineering Newcastle University Abstract To develop a force control scheme

More information

CONTRIBUTIONS TO THE AUTOMATIC CONTROL OF AERIAL VEHICLES

CONTRIBUTIONS TO THE AUTOMATIC CONTROL OF AERIAL VEHICLES 1 / 23 CONTRIBUTIONS TO THE AUTOMATIC CONTROL OF AERIAL VEHICLES MINH DUC HUA 1 1 INRIA Sophia Antipolis, AROBAS team I3S-CNRS Sophia Antipolis, CONDOR team Project ANR SCUAV Supervisors: Pascal MORIN,

More information

V-PITS : VIDEO BASED PHONOMICROSURGERY INSTRUMENT TRACKING SYSTEM. Ketan Surender

V-PITS : VIDEO BASED PHONOMICROSURGERY INSTRUMENT TRACKING SYSTEM. Ketan Surender V-PITS : VIDEO BASED PHONOMICROSURGERY INSTRUMENT TRACKING SYSTEM by Ketan Surender A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science (Electrical Engineering)

More information

The Scientific Data Mining Process

The Scientific Data Mining Process Chapter 4 The Scientific Data Mining Process When I use a word, Humpty Dumpty said, in rather a scornful tone, it means just what I choose it to mean neither more nor less. Lewis Carroll [87, p. 214] In

More information

How To Analyze Ball Blur On A Ball Image

How To Analyze Ball Blur On A Ball Image Single Image 3D Reconstruction of Ball Motion and Spin From Motion Blur An Experiment in Motion from Blur Giacomo Boracchi, Vincenzo Caglioti, Alessandro Giusti Objective From a single image, reconstruct:

More information

Digital Photogrammetric System. Version 6.0.2 USER MANUAL. Block adjustment

Digital Photogrammetric System. Version 6.0.2 USER MANUAL. Block adjustment Digital Photogrammetric System Version 6.0.2 USER MANUAL Table of Contents 1. Purpose of the document... 4 2. General information... 4 3. The toolbar... 5 4. Adjustment batch mode... 6 5. Objects displaying

More information

AP Series Autopilot System. AP-202 Data Sheet. March,2015. Chengdu Jouav Automation Tech Co.,L.t.d

AP Series Autopilot System. AP-202 Data Sheet. March,2015. Chengdu Jouav Automation Tech Co.,L.t.d AP Series Autopilot System AP-202 Data Sheet March,2015 Chengdu Jouav Automation Tech Co.,L.t.d AP-202 autopilot,from Chengdu Jouav Automation Tech Co., Ltd, provides complete professional-level flight

More information

Research Methodology Part III: Thesis Proposal. Dr. Tarek A. Tutunji Mechatronics Engineering Department Philadelphia University - Jordan

Research Methodology Part III: Thesis Proposal. Dr. Tarek A. Tutunji Mechatronics Engineering Department Philadelphia University - Jordan Research Methodology Part III: Thesis Proposal Dr. Tarek A. Tutunji Mechatronics Engineering Department Philadelphia University - Jordan Outline Thesis Phases Thesis Proposal Sections Thesis Flow Chart

More information

CS231M Project Report - Automated Real-Time Face Tracking and Blending

CS231M Project Report - Automated Real-Time Face Tracking and Blending CS231M Project Report - Automated Real-Time Face Tracking and Blending Steven Lee, slee2010@stanford.edu June 6, 2015 1 Introduction Summary statement: The goal of this project is to create an Android

More information

Solving Simultaneous Equations and Matrices

Solving Simultaneous Equations and Matrices Solving Simultaneous Equations and Matrices The following represents a systematic investigation for the steps used to solve two simultaneous linear equations in two unknowns. The motivation for considering

More information

Multi-Touch Control Wheel Software Development Kit User s Guide

Multi-Touch Control Wheel Software Development Kit User s Guide Multi-Touch Control Wheel Software Development Kit User s Guide V3.0 Bulletin #1204 561 Hillgrove Avenue LaGrange, IL 60525 Phone: (708) 354-1040 Fax: (708) 354-2820 E-mail: instinct@grayhill.com www.grayhill.com/instinct

More information

Overview Image Acquisition of Microscopic Slides via Web Camera

Overview Image Acquisition of Microscopic Slides via Web Camera MAX-PLANCK-INSTITUT FÜR MARINE MIKROBIOLOGIE ABTEILUNG MOLEKULARE ÖKOLOGIE Overview Image Acquisition of Microscopic Slides via Web Camera Andreas Ellrott and Michael Zeder Max Planck Institute for Marine

More information

Virtual CRASH 3.0 Staging a Car Crash

Virtual CRASH 3.0 Staging a Car Crash Virtual CRASH 3.0 Staging a Car Crash Virtual CRASH Virtual CRASH 3.0 Staging a Car Crash Changes are periodically made to the information herein; these changes will be incorporated in new editions of

More information

Crater detection with segmentation-based image processing algorithm

Crater detection with segmentation-based image processing algorithm Template reference : 100181708K-EN Crater detection with segmentation-based image processing algorithm M. Spigai, S. Clerc (Thales Alenia Space-France) V. Simard-Bilodeau (U. Sherbrooke and NGC Aerospace,

More information

5-Axis Test-Piece Influence of Machining Position

5-Axis Test-Piece Influence of Machining Position 5-Axis Test-Piece Influence of Machining Position Michael Gebhardt, Wolfgang Knapp, Konrad Wegener Institute of Machine Tools and Manufacturing (IWF), Swiss Federal Institute of Technology (ETH), Zurich,

More information

Encoders for Linear Motors in the Electronics Industry

Encoders for Linear Motors in the Electronics Industry Technical Information Encoders for Linear Motors in the Electronics Industry The semiconductor industry and automation technology increasingly require more precise and faster machines in order to satisfy

More information

Static Environment Recognition Using Omni-camera from a Moving Vehicle

Static Environment Recognition Using Omni-camera from a Moving Vehicle Static Environment Recognition Using Omni-camera from a Moving Vehicle Teruko Yata, Chuck Thorpe Frank Dellaert The Robotics Institute Carnegie Mellon University Pittsburgh, PA 15213 USA College of Computing

More information

How does the Kinect work? John MacCormick

How does the Kinect work? John MacCormick How does the Kinect work? John MacCormick Xbox demo Laptop demo The Kinect uses structured light and machine learning Inferring body position is a two-stage process: first compute a depth map (using structured

More information

EFFICIENT VEHICLE TRACKING AND CLASSIFICATION FOR AN AUTOMATED TRAFFIC SURVEILLANCE SYSTEM

EFFICIENT VEHICLE TRACKING AND CLASSIFICATION FOR AN AUTOMATED TRAFFIC SURVEILLANCE SYSTEM EFFICIENT VEHICLE TRACKING AND CLASSIFICATION FOR AN AUTOMATED TRAFFIC SURVEILLANCE SYSTEM Amol Ambardekar, Mircea Nicolescu, and George Bebis Department of Computer Science and Engineering University

More information

PHOTOGRAMMETRIC TECHNIQUES FOR MEASUREMENTS IN WOODWORKING INDUSTRY

PHOTOGRAMMETRIC TECHNIQUES FOR MEASUREMENTS IN WOODWORKING INDUSTRY PHOTOGRAMMETRIC TECHNIQUES FOR MEASUREMENTS IN WOODWORKING INDUSTRY V. Knyaz a, *, Yu. Visilter, S. Zheltov a State Research Institute for Aviation System (GosNIIAS), 7, Victorenko str., Moscow, Russia

More information

Free Fall: Observing and Analyzing the Free Fall Motion of a Bouncing Ping-Pong Ball and Calculating the Free Fall Acceleration (Teacher s Guide)

Free Fall: Observing and Analyzing the Free Fall Motion of a Bouncing Ping-Pong Ball and Calculating the Free Fall Acceleration (Teacher s Guide) Free Fall: Observing and Analyzing the Free Fall Motion of a Bouncing Ping-Pong Ball and Calculating the Free Fall Acceleration (Teacher s Guide) 2012 WARD S Science v.11/12 OVERVIEW Students will measure

More information

Orbital Mechanics. Angular Momentum

Orbital Mechanics. Angular Momentum Orbital Mechanics The objects that orbit earth have only a few forces acting on them, the largest being the gravitational pull from the earth. The trajectories that satellites or rockets follow are largely

More information

KINEMATICS OF PARTICLES RELATIVE MOTION WITH RESPECT TO TRANSLATING AXES

KINEMATICS OF PARTICLES RELATIVE MOTION WITH RESPECT TO TRANSLATING AXES KINEMTICS OF PRTICLES RELTIVE MOTION WITH RESPECT TO TRNSLTING XES In the previous articles, we have described particle motion using coordinates with respect to fixed reference axes. The displacements,

More information

Development of an automated Red Light Violation Detection System (RLVDS) for Indian vehicles

Development of an automated Red Light Violation Detection System (RLVDS) for Indian vehicles CS11 59 Development of an automated Red Light Violation Detection System (RLVDS) for Indian vehicles Satadal Saha 1, Subhadip Basu 2 *, Mita Nasipuri 2, Dipak Kumar Basu # 2 # AICTE Emeritus Fellow 1 CSE

More information

Rotation: Moment of Inertia and Torque

Rotation: Moment of Inertia and Torque Rotation: Moment of Inertia and Torque Every time we push a door open or tighten a bolt using a wrench, we apply a force that results in a rotational motion about a fixed axis. Through experience we learn

More information

Vision based Vehicle Tracking using a high angle camera

Vision based Vehicle Tracking using a high angle camera Vision based Vehicle Tracking using a high angle camera Raúl Ignacio Ramos García Dule Shu gramos@clemson.edu dshu@clemson.edu Abstract A vehicle tracking and grouping algorithm is presented in this work

More information

Understanding and Applying Kalman Filtering

Understanding and Applying Kalman Filtering Understanding and Applying Kalman Filtering Lindsay Kleeman Department of Electrical and Computer Systems Engineering Monash University, Clayton 1 Introduction Objectives: 1. Provide a basic understanding

More information

Analecta Vol. 8, No. 2 ISSN 2064-7964

Analecta Vol. 8, No. 2 ISSN 2064-7964 EXPERIMENTAL APPLICATIONS OF ARTIFICIAL NEURAL NETWORKS IN ENGINEERING PROCESSING SYSTEM S. Dadvandipour Institute of Information Engineering, University of Miskolc, Egyetemváros, 3515, Miskolc, Hungary,

More information

2-1 Position, Displacement, and Distance

2-1 Position, Displacement, and Distance 2-1 Position, Displacement, and Distance In describing an object s motion, we should first talk about position where is the object? A position is a vector because it has both a magnitude and a direction:

More information

Effective Use of Android Sensors Based on Visualization of Sensor Information

Effective Use of Android Sensors Based on Visualization of Sensor Information , pp.299-308 http://dx.doi.org/10.14257/ijmue.2015.10.9.31 Effective Use of Android Sensors Based on Visualization of Sensor Information Young Jae Lee Faculty of Smartmedia, Jeonju University, 303 Cheonjam-ro,

More information

Synthetic Sensing: Proximity / Distance Sensors

Synthetic Sensing: Proximity / Distance Sensors Synthetic Sensing: Proximity / Distance Sensors MediaRobotics Lab, February 2010 Proximity detection is dependent on the object of interest. One size does not fit all For non-contact distance measurement,

More information

Building an Advanced Invariant Real-Time Human Tracking System

Building an Advanced Invariant Real-Time Human Tracking System UDC 004.41 Building an Advanced Invariant Real-Time Human Tracking System Fayez Idris 1, Mazen Abu_Zaher 2, Rashad J. Rasras 3, and Ibrahiem M. M. El Emary 4 1 School of Informatics and Computing, German-Jordanian

More information

Abstract. Introduction

Abstract. Introduction SPACECRAFT APPLICATIONS USING THE MICROSOFT KINECT Matthew Undergraduate Student Advisor: Dr. Troy Henderson Aerospace and Ocean Engineering Department Virginia Tech Abstract This experimental study involves

More information

Poker Vision: Playing Cards and Chips Identification based on Image Processing

Poker Vision: Playing Cards and Chips Identification based on Image Processing Poker Vision: Playing Cards and Chips Identification based on Image Processing Paulo Martins 1, Luís Paulo Reis 2, and Luís Teófilo 2 1 DEEC Electrical Engineering Department 2 LIACC Artificial Intelligence

More information

2. Dynamics, Control and Trajectory Following

2. Dynamics, Control and Trajectory Following 2. Dynamics, Control and Trajectory Following This module Flying vehicles: how do they work? Quick refresher on aircraft dynamics with reference to the magical flying space potato How I learned to stop

More information

Virtual Mouse Using a Webcam

Virtual Mouse Using a Webcam 1. INTRODUCTION Virtual Mouse Using a Webcam Since the computer technology continues to grow up, the importance of human computer interaction is enormously increasing. Nowadays most of the mobile devices

More information

Modelling, Extraction and Description of Intrinsic Cues of High Resolution Satellite Images: Independent Component Analysis based approaches

Modelling, Extraction and Description of Intrinsic Cues of High Resolution Satellite Images: Independent Component Analysis based approaches Modelling, Extraction and Description of Intrinsic Cues of High Resolution Satellite Images: Independent Component Analysis based approaches PhD Thesis by Payam Birjandi Director: Prof. Mihai Datcu Problematic

More information

Sensors and Cellphones

Sensors and Cellphones Sensors and Cellphones What is a sensor? A converter that measures a physical quantity and converts it into a signal which can be read by an observer or by an instrument What are some sensors we use every

More information

Spatial location in 360 of reference points over an object by using stereo vision

Spatial location in 360 of reference points over an object by using stereo vision EDUCATION Revista Mexicana de Física E 59 (2013) 23 27 JANUARY JUNE 2013 Spatial location in 360 of reference points over an object by using stereo vision V. H. Flores a, A. Martínez a, J. A. Rayas a,

More information

LIST OF CONTENTS CHAPTER CONTENT PAGE DECLARATION DEDICATION ACKNOWLEDGEMENTS ABSTRACT ABSTRAK

LIST OF CONTENTS CHAPTER CONTENT PAGE DECLARATION DEDICATION ACKNOWLEDGEMENTS ABSTRACT ABSTRAK vii LIST OF CONTENTS CHAPTER CONTENT PAGE DECLARATION DEDICATION ACKNOWLEDGEMENTS ABSTRACT ABSTRAK LIST OF CONTENTS LIST OF TABLES LIST OF FIGURES LIST OF NOTATIONS LIST OF ABBREVIATIONS LIST OF APPENDICES

More information

Enhancing the SNR of the Fiber Optic Rotation Sensor using the LMS Algorithm

Enhancing the SNR of the Fiber Optic Rotation Sensor using the LMS Algorithm 1 Enhancing the SNR of the Fiber Optic Rotation Sensor using the LMS Algorithm Hani Mehrpouyan, Student Member, IEEE, Department of Electrical and Computer Engineering Queen s University, Kingston, Ontario,

More information

A Proposal for OpenEXR Color Management

A Proposal for OpenEXR Color Management A Proposal for OpenEXR Color Management Florian Kainz, Industrial Light & Magic Revision 5, 08/05/2004 Abstract We propose a practical color management scheme for the OpenEXR image file format as used

More information

Course 8. An Introduction to the Kalman Filter

Course 8. An Introduction to the Kalman Filter Course 8 An Introduction to the Kalman Filter Speakers Greg Welch Gary Bishop Kalman Filters in 2 hours? Hah! No magic. Pretty simple to apply. Tolerant of abuse. Notes are a standalone reference. These

More information

Aerospace Information Technology Topics for Internships and Bachelor s and Master s Theses

Aerospace Information Technology Topics for Internships and Bachelor s and Master s Theses Aerospace Information Technology s for Internships and Bachelor s and Master s Theses Version Nov. 2014 The Chair of Aerospace Information Technology addresses several research topics in the area of: Avionic

More information

Lecture 14. Point Spread Function (PSF)

Lecture 14. Point Spread Function (PSF) Lecture 14 Point Spread Function (PSF), Modulation Transfer Function (MTF), Signal-to-noise Ratio (SNR), Contrast-to-noise Ratio (CNR), and Receiver Operating Curves (ROC) Point Spread Function (PSF) Recollect

More information

Current Challenges in UAS Research Intelligent Navigation and Sense & Avoid

Current Challenges in UAS Research Intelligent Navigation and Sense & Avoid Current Challenges in UAS Research Intelligent Navigation and Sense & Avoid Joerg Dittrich Institute of Flight Systems Department of Unmanned Aircraft UAS Research at the German Aerospace Center, Braunschweig

More information

Intelligent Submersible Manipulator-Robot, Design, Modeling, Simulation and Motion Optimization for Maritime Robotic Research

Intelligent Submersible Manipulator-Robot, Design, Modeling, Simulation and Motion Optimization for Maritime Robotic Research 20th International Congress on Modelling and Simulation, Adelaide, Australia, 1 6 December 2013 www.mssanz.org.au/modsim2013 Intelligent Submersible Manipulator-Robot, Design, Modeling, Simulation and

More information

MP2128 3X MicroPilot's. Triple Redundant UAV Autopilot

MP2128 3X MicroPilot's. Triple Redundant UAV Autopilot MP2128 3X MicroPilot's Triple Redundant UAV Autopilot Triple redundancy (3X) gives autopilot technology the reliability necessary to safely carry out sensitive flight missions and transport valuable payloads.

More information

3D MODEL DRIVEN DISTANT ASSEMBLY

3D MODEL DRIVEN DISTANT ASSEMBLY 3D MODEL DRIVEN DISTANT ASSEMBLY Final report Bachelor Degree Project in Automation Spring term 2012 Carlos Gil Camacho Juan Cana Quijada Supervisor: Abdullah Mohammed Examiner: Lihui Wang 1 Executive

More information

ZMART Technical Report The International Aerial Robotics Competition 2014

ZMART Technical Report The International Aerial Robotics Competition 2014 ZMART Technical Report The International Aerial Robotics Competition 2014 ZJU s Micro-Aerial Robotics Team (ZMART) 1 Zhejiang University, Hangzhou, Zhejiang Province, 310027, P.R.China Abstract The Zhejiang

More information

VECTORAL IMAGING THE NEW DIRECTION IN AUTOMATED OPTICAL INSPECTION

VECTORAL IMAGING THE NEW DIRECTION IN AUTOMATED OPTICAL INSPECTION VECTORAL IMAGING THE NEW DIRECTION IN AUTOMATED OPTICAL INSPECTION Mark J. Norris Vision Inspection Technology, LLC Haverhill, MA mnorris@vitechnology.com ABSTRACT Traditional methods of identifying and

More information

UAV Pose Estimation using POSIT Algorithm

UAV Pose Estimation using POSIT Algorithm International Journal of Digital ontent Technology and its Applications. Volume 5, Number 4, April 211 UAV Pose Estimation using POSIT Algorithm *1 M. He, 2. Ratanasawanya, 3 M. Mehrandezh, 4 R. Paranjape

More information

3D Vision An enabling Technology for Advanced Driver Assistance and Autonomous Offroad Driving

3D Vision An enabling Technology for Advanced Driver Assistance and Autonomous Offroad Driving 3D Vision An enabling Technology for Advanced Driver Assistance and Autonomous Offroad Driving AIT Austrian Institute of Technology Safety & Security Department Manfred Gruber Safe and Autonomous Systems

More information

A Short Introduction to Computer Graphics

A Short Introduction to Computer Graphics A Short Introduction to Computer Graphics Frédo Durand MIT Laboratory for Computer Science 1 Introduction Chapter I: Basics Although computer graphics is a vast field that encompasses almost any graphical

More information

PRODUCT SHEET. info@biopac.com support@biopac.com www.biopac.com

PRODUCT SHEET. info@biopac.com support@biopac.com www.biopac.com EYE TRACKING SYSTEMS BIOPAC offers an array of monocular and binocular eye tracking systems that are easily integrated with stimulus presentations, VR environments and other media. Systems Monocular Part

More information

MSc in Autonomous Robotics Engineering University of York

MSc in Autonomous Robotics Engineering University of York MSc in Autonomous Robotics Engineering University of York Practical Robotics Module 2015 A Mobile Robot Navigation System: Labs 1a, 1b, 2a, 2b. Associated lectures: Lecture 1 and lecture 2, given by Nick

More information

False alarm in outdoor environments

False alarm in outdoor environments Accepted 1.0 Savantic letter 1(6) False alarm in outdoor environments Accepted 1.0 Savantic letter 2(6) Table of contents Revision history 3 References 3 1 Introduction 4 2 Pre-processing 4 3 Detection,

More information

A Study on SURF Algorithm and Real-Time Tracking Objects Using Optical Flow

A Study on SURF Algorithm and Real-Time Tracking Objects Using Optical Flow , pp.233-237 http://dx.doi.org/10.14257/astl.2014.51.53 A Study on SURF Algorithm and Real-Time Tracking Objects Using Optical Flow Giwoo Kim 1, Hye-Youn Lim 1 and Dae-Seong Kang 1, 1 Department of electronices

More information

Whitepaper. Image stabilization improving camera usability

Whitepaper. Image stabilization improving camera usability Whitepaper Image stabilization improving camera usability Table of contents 1. Introduction 3 2. Vibration Impact on Video Output 3 3. Image Stabilization Techniques 3 3.1 Optical Image Stabilization 3

More information

We can display an object on a monitor screen in three different computer-model forms: Wireframe model Surface Model Solid model

We can display an object on a monitor screen in three different computer-model forms: Wireframe model Surface Model Solid model CHAPTER 4 CURVES 4.1 Introduction In order to understand the significance of curves, we should look into the types of model representations that are used in geometric modeling. Curves play a very significant

More information

Optical Digitizing by ATOS for Press Parts and Tools

Optical Digitizing by ATOS for Press Parts and Tools Optical Digitizing by ATOS for Press Parts and Tools Konstantin Galanulis, Carsten Reich, Jan Thesing, Detlef Winter GOM Gesellschaft für Optische Messtechnik mbh, Mittelweg 7, 38106 Braunschweig, Germany

More information

Practical Tour of Visual tracking. David Fleet and Allan Jepson January, 2006

Practical Tour of Visual tracking. David Fleet and Allan Jepson January, 2006 Practical Tour of Visual tracking David Fleet and Allan Jepson January, 2006 Designing a Visual Tracker: What is the state? pose and motion (position, velocity, acceleration, ) shape (size, deformation,

More information

DINAMIC AND STATIC CENTRE OF PRESSURE MEASUREMENT ON THE FORCEPLATE. F. R. Soha, I. A. Szabó, M. Budai. Abstract

DINAMIC AND STATIC CENTRE OF PRESSURE MEASUREMENT ON THE FORCEPLATE. F. R. Soha, I. A. Szabó, M. Budai. Abstract ACTA PHYSICA DEBRECINA XLVI, 143 (2012) DINAMIC AND STATIC CENTRE OF PRESSURE MEASUREMENT ON THE FORCEPLATE F. R. Soha, I. A. Szabó, M. Budai University of Debrecen, Department of Solid State Physics Abstract

More information

How To Program With Adaptive Vision Studio

How To Program With Adaptive Vision Studio Studio 4 intuitive powerful adaptable software for machine vision engineers Introduction Adaptive Vision Studio Adaptive Vision Studio software is the most powerful graphical environment for machine vision

More information

How To Control Gimbal

How To Control Gimbal Tarot 2-Axis Brushless Gimbal for Gopro User Manual V1.0 1. Introduction Tarot T-2D gimbal is designed for the Gopro Hero3, which is widely used in film, television productions, advertising aerial photography,

More information

Basler. Line Scan Cameras

Basler. Line Scan Cameras Basler Line Scan Cameras High-quality line scan technology meets a cost-effective GigE interface Real color support in a compact housing size Shading correction compensates for difficult lighting conditions

More information

Build Panoramas on Android Phones

Build Panoramas on Android Phones Build Panoramas on Android Phones Tao Chu, Bowen Meng, Zixuan Wang Stanford University, Stanford CA Abstract The purpose of this work is to implement panorama stitching from a sequence of photos taken

More information

Time Domain and Frequency Domain Techniques For Multi Shaker Time Waveform Replication

Time Domain and Frequency Domain Techniques For Multi Shaker Time Waveform Replication Time Domain and Frequency Domain Techniques For Multi Shaker Time Waveform Replication Thomas Reilly Data Physics Corporation 1741 Technology Drive, Suite 260 San Jose, CA 95110 (408) 216-8440 This paper

More information

REAL TIME TRAFFIC LIGHT CONTROL USING IMAGE PROCESSING

REAL TIME TRAFFIC LIGHT CONTROL USING IMAGE PROCESSING REAL TIME TRAFFIC LIGHT CONTROL USING IMAGE PROCESSING Ms.PALLAVI CHOUDEKAR Ajay Kumar Garg Engineering College, Department of electrical and electronics Ms.SAYANTI BANERJEE Ajay Kumar Garg Engineering

More information