VISION-BASED POSITION ESTIMATION IN MULTIPLE QUADROTOR SYSTEMS WITH APPLICATION TO FAULT DETECTION AND RECONFIGURATION

MASTER THESIS, SCHOOL OF ENGINEERING, UNIVERSITY OF SEVILLE

Author: Alejandro Suárez Fernández-Miranda
Supervisors: Dr. Guillermo Heredia Benot, Dr. Aníbal Ollero Baturone


A strong man doesn't need to read the future, he makes his own. (Solid Snake - Metal Gear Solid 2)


Table of Contents

1. INTRODUCTION
   1.1. Introduction
   1.2. General description of the project
   1.3. Related works
      1.3.1. Vision-based position estimation
      1.3.2. Visual tracking algorithms
      1.3.3. FDIR
      1.3.4. Quadrotor dynamic modeling and control
   1.4. Development time estimation
2. Vision-based position estimation in multiple quadrotor systems
   2.1. Problem description
   2.2. Model of the system
   2.3. Position estimation algorithm
   2.4. Visual tracking algorithm
   Experimental results
      Software implementation
      Description of the experiments
      Analysis of the results
         Fixed quadrotor with parallel cameras
         Fixed quadrotor with orthogonal camera configuration
         Fixed quadrotor with moving cameras
         Fixed quadrotor with orthogonal camera configuration and tracking loss
         Z-axis estimation with flying quadrotor and parallel camera configuration
         Depth and lateral motion in the XY plane
         Quadrotor executing circular and random trajectories
         Accuracy of the estimation
      Summary
   Evolution in the development of the vision-based position estimation system
3. Application of virtual sensor to Fault Detection and Identification
   Introduction
   Additive positioning sensor fault
   Lock-in-place fault
   3.4. Criterion for virtual sensor rejection
      Threshold function
4. Simulation of perturbations in the virtual sensor over quadrotor trajectory control
   Introduction
   Quadrotor trajectory control
      Dynamic model
      Attitude and height control
      Velocity control
      Trajectory generation
   Model of perturbations
      Sampling rate
      Delay
      Noise
      Outliers
      Packet loss
   Simulation results
      Different speeds and delays
      Noise and outliers with fixed delay
      Delay, noise, outliers and packet loss
   Conclusions
REFERENCES

List of Figures

Figure 1. Estimated percentage of the development time for each of the phases of the project
Figure 2. Gantt diagram with the evolution of the project
Figure 3. Two quadrotors with cameras in their base tracking a third quadrotor whose position is to be estimated, represented by the green ball
Figure 4. Images taken during data acquisition experiments at the same time from both cameras, with two orange balls at the top of a Hummingbird quadrotor
Figure 5. Relative position vectors between the cameras and the tracked quadrotor
Figure 6. Pin-hole camera model
Figure 7. State machine implemented in the modified version of the CAMShift algorithm
Figure 8. Orange rubber hat at the top of the Hummingbird quadrotor used as visual marker
Figure 9. Camera configuration with parallel optical axes in the Y-axis of the global frame and fixed quadrotor
Figure 10. Position estimation error in XYZ (blue, green, red) and distance between cameras and quadrotor (magenta, black) for fixed quadrotor and parallel optical axes. Blue * marks correspond to instants with tracking loss from one of the cameras
Figure 11. Orthogonal configuration of the cameras
Figure 12. Position estimation error in XYZ (blue, green, red) and distance between cameras and quadrotor (magenta, black) for fixed quadrotor and orthogonal configuration of the cameras
Figure 13. Estimation error and distance to the cameras with fixed quadrotor and moving cameras, initially with parallel optical axes and finally with orthogonal configuration
Figure 14. Evolution of the position estimation error with multiple tracking losses (marked by a * character) in one of the cameras
Figure 15. Orthogonal camera configuration with tracked quadrotor out of the FoV for one of the cameras
Figure 16. Position estimation error and distance to the cameras with long-duration tracking loss for one of the cameras (blue and green * characters) and both cameras (red * characters)
Figure 17. Number of consecutive frames with tracking loss (blue) and threshold (red)
Figure 18. Configuration of the cameras and the quadrotor for the Z-axis estimation experiment
Figure 19. Vicon height measurement (red) and vision-based estimation (blue)
Figure 20. Position estimation error in XYZ (blue, green, red) and distance between each of the cameras and the tracked quadrotor (magenta, black)
Figure 21. Configuration of the cameras for the experiment with depth (YE axis) and lateral motion (XE axis)
Figure 22. X-axis estimation with depth and lateral motion for the quadrotor
Figure 23. Y-axis estimation with depth and lateral motion for the quadrotor
Figure 24. Number of consecutive frames with tracking loss with depth and lateral motion for the quadrotor
Figure 25. Configuration of the cameras with the quadrotor executing circular and random trajectories
Figure 26. X-axis estimation and real position when the quadrotor is executing circular and random trajectories
Figure 27. Y-axis estimation and real position when the quadrotor is executing circular and random trajectories
Figure 28. Number of consecutive frames with tracking loss when the quadrotor is executing circular and random trajectories
Figure 29. Simulation of GPS data with drift error between t = 2 s and t = 3 s
Figure 30. Distance between position given by GPS and vision-based estimation with GPS drift error
Figure 31. GPS simulated data with fixed measurement from t = 11 s
Figure 32. Distance between vision-based position estimation and GPS with faulty data (black), and threshold (red)
Figure 33. Number of consecutive frames with tracking loss and threshold for rejecting virtual sensor estimation
Figure 34. Angle and distance between the camera and the tracked object
Figure 35. Estimation error and distance to the cameras with the cameras changing from parallel to orthogonal configuration
Figure 36. Distance between GPS simulated data and estimated position and threshold corresponding to Figure
Figure 37. Two quadrotors with cameras in their base tracking a third quadrotor whose position is to be estimated, represented by the green ball
Figure 38. Images taken during data acquisition experiments at the same time from both cameras, with two orange balls at the top of a Hummingbird quadrotor
Figure 39. Reference path and trajectories followed in XY plane with different values of delay in XY position measurement, fixed delay of 1 ms in height measurement and V = 0.5 m/s
Figure 40. External estimation of XY position with Gaussian noise, outliers and a reference speed of V = 0.5 m/s
Figure 41. Trajectories followed by the quadrotor with noise and outliers (blue) and without them (black)
Figure 42. Quadrotor trajectories with simultaneous application of noise, delay, outliers and packet loss for V = 0.5 m/s (blue) and V = 0.75 m/s (green)
Figure 43. Reference and real value for height when XY position is affected by noise, delay, outliers and packet loss with a reference speed of V = 0.75 m/s

Acknowledgments

This work was partially funded by the European Commission under the FP7 Integrated Project EC-SAFEMOBIL (FP ) and the CLEAR Project (DPI C2-1) funded by the Ministerio de Ciencia e Innovación of the Spanish Government. The author wishes to acknowledge the support received from CATEC during the experiments carried out in its testbed. Special thanks to Miguel Ángel Trujillo and Jonathan Ruiz from CATEC, and to professors José Ramiro Martínez de Dios and Begoña C. Arrúe Ullés from the University of Seville, for their help. Finally, the author wishes to highlight all the help and advice provided by the supervisors Guillermo Heredia Benot and Aníbal Ollero Baturone.

Publications

Accepted papers:
Suárez, A., Heredia, G., Ollero, A.: Analysis of Perturbations in Trajectory Control Using Visual Estimation in Multiple Quadrotor Systems. First Iberian Robotics Conference (2013)

Papers awaiting acceptance:
Suárez, A., Heredia, G., Martínez-de-Dios, R., Trujillo, M.A., Ollero, A.: Cooperative Vision-Based Virtual Sensor for MultiUAV Fault Detection. International Conference on Robotics and Automation (2014)

1. INTRODUCTION

1.1. Introduction

Position estimation is an important issue in many mobile robotics applications where automatic position or trajectory control is required. This problem can be found in very different scenarios, including both terrestrial and aerial robots, with different specifications in accuracy, reliability, cost, weight, size or computational resources. Two main approaches can be considered: position estimation based on odometry, beacons or other internal sensors, or using a global positioning system such as GPS or Galileo. Each of these technologies has its advantages and disadvantages. The selection of a specific device will depend on the particular application, namely, on the specifications of the operating conditions of the robot. For example, it is well known that GPS sensors only work with satellite visibility, so they cannot operate indoors, but they are extensively used in fixed-wing UAVs and other outdoor exploration vehicles. Another drawback of these devices is their low accuracy, with position errors around two meters, although centimeter accuracy can be obtained with Differential GPS (DGPS). For small indoor wheeled robots, a simple and low-cost solution is to use odometry methods, integrating speed or acceleration information obtained from optical encoders or Inertial Measurement Units (IMU). However, the lack of a position reference will cause a drift error over time, so the estimation might become useless after a few seconds. In recent years, great effort has been dedicated to solving the Simultaneous Localization and Mapping (SLAM) problem, making its application possible in real time, although it still carries high computational costs.

The current trend is to integrate multiple sources of information, fusing their data in order to obtain an estimation with better accuracy and reliability. The sensors can be either static at fixed positions or mounted on mobile robots. Multi-robot systems, for instance, are platforms where these techniques can be implemented naturally. This work focuses on multi-quadrotor systems with a camera mounted on the base of each UAV, so the position of a given quadrotor is obtained from the centroid of its projection on the image planes and from the position and orientation of the cameras. Moreover, the Kalman filter used as estimator will also provide the velocity of the vehicle. The external estimation obtained (position, orientation or velocity) can be used for controlling the vehicle. However, some aspects such as estimation errors, delays or estimation availability have to be considered carefully. The effects of the new perturbations introduced in the control loop should be analyzed in simulation before their application to the real system, so potential accidents causing human or material damage can be avoided.

1.2. General description of the project

The goal of this work is the development of a system for obtaining a position estimate of a quadrotor being visually tracked by two cameras whose position and orientation are known. A simulation study based on data obtained from experiments will be carried out for detecting failures in internal sensors, allowing system reconfiguration to keep the vehicle under control. If the vision-based position estimation provided by the virtual sensor is going to be used in position or trajectory control, it is convenient to study the effects of the associated perturbations (delays, tracking loss, noise, outliers) on the control. Before testing it in real conditions, with the associated risk of accidents and human or material damage, it is preferable to analyze the performance of the controller in simulation. Therefore, a simulator of the quadrotor dynamics and its trajectory control system, including the simulation of the identified perturbations, was built.

The position and velocity of the tracked object in 3D space will be obtained from an Extended Kalman Filter (EKF), taking as input the centroid of the object on every image plane of the cameras, as well as their position and orientation. Two visual tracking algorithms were used in this project: Tracking-Learning-Detection (TLD), and a modified version of the CAMShift algorithm. However, the TLD algorithm was rejected due to its high computational cost and its poor results when applied to quadrotor tracking, as it is based on template matching. On the other hand, CAMShift is a color-based tracking algorithm that uses the HSV color space for extracting color information (Hue component) into a single-channel image, simplifying object identification and making it robust to illumination and appearance changes.

In multi-UAV applications such as formation flight, cooperative surveillance and monitoring or aerial refueling, every robot might carry additional sensors, not for its own control, but for estimating part of the state of another vehicle, for example its position, velocity or orientation. This external estimation can be seen as a virtual sensor, in the sense that it provides a measurement of a certain signal computed from other sensors. In normal conditions, both internal and virtual sensors should provide similar measurements. However, consider a situation with a UAV approaching an area without satellite visibility, so its GPS sensor is not able to provide position data while the IMU keeps integrating acceleration, accumulating error over time; the difference between both sources then becomes significant. If a certain threshold is exceeded, the GPS can be considered faulty, starting a reconfiguration process that handles this situation.

For the external position estimation, a communication network is necessary for the interchange of the information used in its computation (time stamp, position and orientation of the cameras, centroid of the tracked object in the image plane). Although this is beyond the scope of this work, communication delays and packet losses should be taken into account when the virtual sensor is going to be used for controlling the UAV. A quadrotor simulator with its trajectory control system has been developed for studying the effects of a number of perturbations identified during the experiments, including those related to communications. The simulator was implemented as a MATLAB-Simulink block diagram that includes the quadrotor dynamics, the attitude, position and trajectory controllers, and a way-point

generator. Graphical and numerical results are shown for different conditions, highlighting the most important aspects in each case. These results should be used as reference only, as the effects of perturbations on quadrotor performance will depend on the control scheme being used.

Finally, all position estimation experiments were performed holding the cameras by hand: they were not mounted on the base of any quadrotor. Furthermore, both cameras were connected to the same computer through a five-meter cable, which limited the movements around the tracked UAV. In the next step of the project (not considered here), the cameras will be mounted on the quadrotors, and image processing will be done onboard or in a ground control station. Onboard image acquisition and processing introduces additional problems such as vibrations, weight limitations, or available bandwidth.

1.3. Related works

The main contribution of this work is the application of visual position estimation to Fault Detection and Identification (FDI). However, a number of related issues have also been treated, including visual tracking algorithms, quadrotor dynamic modeling and quadrotor control.

1.3.1. Vision-based position estimation

The problem of multi-UAV position estimation in the context of forest fire detection has been treated in [1], estimating motion from multiple planar homographies. Accelerometer, gyroscope and visual sensor measurements are combined in [2] using a nonlinear complementary filter for estimating pose and linear velocity in an aerial robot. The Simultaneous Localization and Mapping (SLAM) problem has been applied to small UAVs outdoors, in partially structured environments [3]. Quadrotor control using onboard or ground cameras is described in [4] and [5]. Here both position and orientation measurements are computed from the images provided by a pair of cameras. Homography techniques are combined with a Kalman filter in [6] for obtaining UAV position estimation when building mosaics. Other applications where vision-based position estimation can be employed include formation flight and aerial refueling [7], [8], [9], [10].

1.3.2. Visual tracking algorithms

Visual tracking algorithms applied to position estimation of moving objects have to be fast enough to provide an accurate estimation. As mentioned earlier, the TLD algorithm [11] was tested in the first place due to its ability to adapt to changes in the appearance of the object. However, as this algorithm is based on template matching and the surfaces of the quadrotors

are not big enough, most of the time the tracking was lost. Moreover, the execution time was too high due to the large number of operations involved in the correlations with the template list. Color-based tracking algorithms such as CAMShift [12] present good properties for this purpose, including simplicity, low computation time, invariance to changes in illumination, rotation and position, and noise rejection. A color marker contrasting with the background has to be placed on the object to be tracked. A problem with this algorithm arises when an object with a similar color appears in the image, although it can be solved by considering additional features. The basic CAMShift assumes that the tracked object is always visible in the image, so it cannot handle tracking losses. Some modifications have been made to CAMShift to make tracking recovery possible when the object is temporarily occluded, when it changes its appearance or when similarly colored objects are contained in the scene [13]. A Kalman filter is used in [14] for handling occlusions, while a multidimensional color histogram in combination with motion information solves the problem of distinguishing colors.

1.3.3. FDIR

Reliability and fault tolerance have always been important issues in UAVs [20], where Fault Detection and Identification (FDI) techniques play an important role in the efforts to increase the reliability of the systems. This is even more important when teams of aerial vehicles cooperate closely with each other and with the environment, as is the case in formation flight and heterogeneous UAV teams, because collisions between them, or between the vehicles and objects in the environment, may arise. In a team of cooperating autonomous vehicles, FDI in which individual vehicles use their own sensors can be regarded as Component Level (CL) FDI.
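The threshold-based comparison between an internal sensor and the vision-based virtual sensor, described in the project overview, can be sketched as a simple residual check. This is only an illustrative sketch: the class name, threshold and persistence values are assumptions, not values taken from the thesis.

```python
import math

def residual(p_a, p_b):
    """Euclidean distance between two XYZ position estimates."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p_a, p_b)))

class GpsFaultDetector:
    """Declares the internal sensor faulty when the residual between the GPS
    position and the virtual-sensor estimate stays above a threshold for
    several consecutive samples (hypothetical tuning values)."""

    def __init__(self, threshold_m=1.5, persistence=10):
        self.threshold_m = threshold_m
        self.persistence = persistence
        self.count = 0  # consecutive samples above the threshold

    def update(self, p_gps, p_virtual):
        if residual(p_gps, p_virtual) > self.threshold_m:
            self.count += 1
        else:
            self.count = 0  # residual back to normal: reset the counter
        return self.count >= self.persistence
```

Requiring the residual to persist over several samples avoids triggering the reconfiguration process on an isolated outlier of either sensor.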
Most CL-FDI applications to UAVs that appear in the literature use model-based methods, which try to diagnose faults using the redundancy of some mathematical description of the system dynamics. Model-based CL-FDI has been applied to unmanned aircraft, either fixed wing UAVs [21] or helicopter UAVs [22][23][24]. The Team Level (TL) FDI exploits the team information for detection of faults. Most published works rely on transmission of the state of the vehicles through the Communications channel for TL-FDI [25]. What has not been thoroughly explored is the use of the sensors onboard the other vehicles of the team for detection of faults in an autonomous vehicle, which requires sensing the state of a vehicle from the other team components Quadrotor dynamic modeling and control Quadrotor modeling and control has been extensively treated in literature. The derivation of the dynamic model is described in detail in [15]. Some control methods applied in simulation and in real conditions can be found here too. PID, LQ and Backstepping controllers have been tested in [16] and [17] over an indoor micro quadrotor. Mathematical modeling and experimental results in quadrotor trajectory generation and control can be found in [18]. Ref. 13

[19] addresses the same problem, but the trajectory generation allows the execution of aggressive maneuvers.

1.4. Development time estimation

The development of this project can be divided into three phases:

- Development of the vision-based position estimation system
- Development of the quadrotor trajectory control simulator
- Documentation (papers, reports, thesis)

The estimated percentage of time dedicated to each of these phases is represented in Figure 1. The Gantt diagram with the identified tasks and their start and end dates can be seen in Figure 2. The project started in November 2012, with the technical part being finished in June 2013. Since then, two papers have been sent to the ROBOT 2013 congress (accepted) and to ICRA 2014 (awaiting acceptance), and the project report has been written.

Figure 1. Estimated percentage of the development time for each of the phases of the project

Figure 2. Gantt diagram with the evolution of the project


2. Vision-based position estimation in multiple quadrotor systems

2.1. Problem description

Consider a situation with three quadrotors A, B and C. Two of them, A and B, have cameras mounted on their base with known position and orientation referred to a global frame. The images taken by the cameras are sent along with their position and orientation to a ground station. Both cameras will try to stay focused on the third quadrotor, C, and a tracking algorithm will be applied to obtain the centroid of the object on every received image. An external position estimator executed in the ground station will use these data to obtain an estimate of quadrotor C's position, which can be used for position or trajectory control in case C does not have this kind of sensor, its sensors are damaged, or they are temporarily unavailable. The situation described above is shown in Figure 3. Here the cones represent the fields of view of the cameras, the orange quadrotor is the one being tracked and the green ball corresponds to its position estimate.

Figure 3. Two quadrotors with cameras in their base tracking a third quadrotor whose position is to be estimated, represented by the green ball

One of the main issues in vision-based position estimation applied to trajectory or position control is the presence of delays in the control loop, which must not be too high, to prevent the system from becoming unstable. The following sources of delay can be identified:

- Image acquisition delay
- Image transmission through the radio link
- Image processing for the tracking algorithm
- Position estimation and its transmission

The first two are imposed by the hardware and the available bandwidth. The last one is negligible in comparison with the others. On the other hand, image processing is very dependent on the

computation cost required by the tracking algorithm. In this work, the external position estimation system was developed and tested with real data, obtaining the position and orientation of the cameras and the tracked quadrotor from a Vicon Motion Capture System in the CATEC testbed. The visual tracking algorithm used was a modified version of the CAMShift algorithm. This color-based tracking algorithm uses the Hue channel of the HSV image representation for building a model of the object and detecting it, applying Mean-Shift to compute the centroid of the probability distribution. As this algorithm is based only on color information, a small orange ball was placed on top of the tracked quadrotor, in contrast with the blue floor of the testbed. Figure 4 shows two images captured by the cameras during the data acquisition phase.

Figure 4. Images taken during data acquisition experiments at the same time from both cameras, with two orange balls at the top of a Hummingbird quadrotor

Although in a practical application the external position estimation process will run in real time, here the computations were done off-line in order to ease the development and debugging of the system, so the estimation was carried out in two phases:

1) The data acquisition phase, where the images and the measurements of the position and orientation of both cameras and the tracked object were captured along with the time stamp, and saved into a file and a directory containing all the images.

2) The position estimation phase, corresponding to the execution of the extended Kalman filter that makes use of the captured data to provide an off-line estimate of the quadrotor position at every instant indicated by the time stamp.

As normal cameras do not provide depth information (unless other constraints are considered, such as the tracked object's size), two or more cameras are needed in order to obtain the position of the quadrotor in three-dimensional space.
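Why a second camera recovers depth can be illustrated with a midpoint triangulation sketch: each camera pose and detected centroid define a ray in {E}, and the object is taken at the midpoint of the closest points of the two rays. This is only an illustration (with hypothetical function names); the thesis itself fuses the observations with an EKF rather than triangulating directly.

```python
import numpy as np

def triangulate_midpoint(p1, d1, p2, d2):
    """Midpoint of the closest points between two rays p_i + t_i * d_i.
    p1, p2: camera centers in {E}; d1, d2: ray directions toward the object."""
    p1, d1 = np.asarray(p1, float), np.asarray(d1, float)
    p2, d2 = np.asarray(p2, float), np.asarray(d2, float)
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        return None  # rays (almost) parallel: depth is unobservable
    t1 = (b * e - c * d) / denom   # parameter of closest point on ray 1
    t2 = (a * e - b * d) / denom   # parameter of closest point on ray 2
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))
```

With a single static camera the two rays coincide and the depth along the ray is undetermined, which is exactly the degenerate case handled by the parallel-ray check.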
Even with one camera, if the camera changes its position and orientation and the tracked object's movement is small, the position can be estimated. One of the main advantages of using a Kalman filter is its ability to integrate multiple sources of information, in the sense that it will try to provide the best estimate independently of the number of observations available at a given instant. The results of the experiments presented here were obtained with two cameras, although in some cases the tracked object was occluded or out of the field of view (FoV) of one or both cameras. The

extended Kalman filter equations described later were obtained for two cameras, but they can be easily modified to consider an arbitrary number of cameras.

2.2. Model of the system

The system for the vision-based position estimation of a moving object using two cameras is represented in Figure 5. The cameras are assumed to be mounted on the quadrotors, but for clarity they have not been drawn.

Figure 5. Relative position vectors between the cameras and the tracked quadrotor

The position and orientation of the cameras and the tracked quadrotor are referred to the fixed frame {E} = {X_E, Y_E, Z_E}. For this problem, P_CAM1, P_CAM2 and the rotation matrices R^E_CAM1 and R^E_CAM2 are known. The following relationship between the position vectors is derived:

P^E_Obj = P^E_CAM1 + R^E_CAM1 P^CAM1_Obj = P^E_CAM2 + R^E_CAM2 P^CAM2_Obj    ( 1 )

The tracking algorithm will provide the centroid of the tracked object. The pin-hole camera model relates the position of the object referred to the camera coordinate system in 3D space with its projection on the image plane. Assuming that the optical axis is X, then:

x_IM = f_x y^CAM_Obj / x^CAM_Obj ;  y_IM = f_y z^CAM_Obj / x^CAM_Obj    ( 2 )

where f_x and f_y are the focal lengths in both axes of the cameras, assumed to be equal for all cameras. Figure 6 represents the pin-hole camera model with the indicated variables.
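As a minimal sketch, Eqs. (1) and (2) can be combined to predict the image coordinates of the object for a camera with known pose; the function and argument names below are illustrative, not part of the thesis software.

```python
import numpy as np

def project_to_image(p_obj_E, p_cam_E, R_E_cam, fx, fy):
    """Express the object position in the camera frame (inverse of Eq. (1))
    and project it with the pin-hole model of Eq. (2), optical axis = camera X.
    R_E_cam rotates camera-frame vectors into the global frame {E}."""
    p_rel = R_E_cam.T @ (np.asarray(p_obj_E, float) - np.asarray(p_cam_E, float))
    x, y, z = p_rel
    if x <= 0:
        return None  # object behind the camera: no valid projection
    return np.array([fx * y / x, fy * z / x])
```

For a camera at the origin with identity orientation, an object at (2, 1, 0.5) m projects to (f_x/2, f_y/4), i.e. the image coordinates scale inversely with the depth along the optical axis.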

Figure 6. Pin-hole camera model

A model of the camera lens is also needed to compensate the typical radial and tangential distortion. A calibration process using a chessboard or circles pattern is required to obtain the distortion coefficients. There are two ways to compensate for this kind of perturbation:

A) Backward compensation: given the centroid of the object in the image plane, the distortion is undone so the ideal projection of the point is obtained. However, this might require numerical approximations if the equations are not invertible.

B) Forward compensation: the position estimator obtains an estimate of the object centroid, and the distortion model is applied directly to it. The drawback of this solution is that the distortion equations should be considered when computing the Jacobian matrix; otherwise a slight error must be accepted.

The distorted point on the image plane is computed as follows:

x_d = (1 + k_c(1) r^2 + k_c(2) r^4 + k_c(5) r^6) x_n + dx    ( 3 )

Here x_n = [x, y]^T is the normalized image projection (without distortion), r^2 = x^2 + y^2, and dx is the tangential distortion vector:

dx = [ 2 k_c(3) x y + k_c(4) (r^2 + 2 x^2) ;  k_c(3) (r^2 + 2 y^2) + 2 k_c(4) x y ]    ( 4 )

The vector of distortion coefficients, k_c, as well as the focal length and the principal point of the cameras, were obtained with the MATLAB camera calibration toolbox.
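The forward model of Eqs. (3)-(4) translates almost directly into code; note the shift from the toolbox's 1-based indices k_c(1..5) to 0-based indexing. This is a sketch of the model, not the thesis implementation.

```python
def distort(xn, yn, kc):
    """Apply the radial + tangential distortion of Eqs. (3)-(4) to a
    normalized image point (xn, yn). kc is the 5-element coefficient vector
    in the MATLAB calibration toolbox ordering, accessed with 0-based indices:
    kc[0], kc[1], kc[4] radial; kc[2], kc[3] tangential."""
    r2 = xn * xn + yn * yn
    radial = 1.0 + kc[0] * r2 + kc[1] * r2 ** 2 + kc[4] * r2 ** 3
    dx = 2.0 * kc[2] * xn * yn + kc[3] * (r2 + 2.0 * xn * xn)
    dy = kc[2] * (r2 + 2.0 * yn * yn) + 2.0 * kc[3] * xn * yn
    return (radial * xn + dx, radial * yn + dy)
```

Because this forward map is simple to evaluate but awkward to invert in closed form, it matches the forward-compensation option B above, where the distortion is applied to the predicted centroid instead of being undone on the measurement.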

2.3. Position estimation algorithm

An Extended Kalman Filter (EKF) was used for the position estimation of the tracked object from its centroid in both images and the position and orientation of the cameras. The extended version of the algorithm is used because of the presence of nonlinearities in the rotation matrix and in the pin-hole camera model. For the EKF application, a nonlinear state-space description of the system is considered in the following way:

x_{k+1} = f(x_k) + w_k ;  z_k = h(x_k) + v_k    ( 5 )

where x_k is the state vector, f(·) is the state evolution function, z_k is the measurement vector, h(·) is the output function, and w_k and v_k are Gaussian noise processes. The state vector contains the position and velocity of the tracked UAV referred to the fixed frame {E}, while the measurement vector contains the centroid of the object in both images given by the tracking algorithm at the current instant, but also at the previous one (this is done to take the velocity into account when updating the estimate). These two vectors are then given by:

x_k = [x, y, z, v_x, v_y, v_z]^T ;  z_k = [x^1_IM,k, y^1_IM,k, x^2_IM,k, y^2_IM,k, x^1_IM,k-1, y^1_IM,k-1, x^2_IM,k-1, y^2_IM,k-1]^T    ( 6 )

If no other information source can be used, a linear motion model is assumed, so the system evolution function is:

f(x_k) = [x + v_x Δt, y + v_y Δt, z + v_z Δt, v_x, v_y, v_z]^T    ( 7 )

Here Δt is the elapsed time between consecutive updates. If the acceleration of the tracked quadrotor can be obtained from its internal sensors or computed from its orientation, this information can be integrated in the last three terms of the system evolution function:

f(x_k) = [x + v_x Δt, y + v_y Δt, z + v_z Δt, v_x + a_x Δt, v_y + a_y Δt, v_z + a_z Δt]^T    ( 8 )
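The prediction step with the constant-velocity model of Eq. (7) can be sketched as follows; the isotropic process-noise term q·I is a simplifying assumption for illustration, not the covariance tuning used in the thesis.

```python
import numpy as np

def ekf_predict(x, P, dt, q=1e-2):
    """EKF prediction with the constant-velocity model of Eq. (7).
    State x = [px, py, pz, vx, vy, vz]; P is the 6x6 state covariance."""
    F = np.eye(6)
    F[0:3, 3:6] = dt * np.eye(3)       # position integrates velocity over dt
    x_pred = F @ x                      # linear model: f(x) = F x
    P_pred = F @ P @ F.T + q * np.eye(6)  # simplified process-noise model
    return x_pred, P_pred
```

Because the motion model is linear, F is also the Jacobian of the state evolution function, so the same matrix serves for both the state and the covariance propagation.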

On the other hand, it is necessary to relate the measurable variables (the centroid of the tracked object on every image plane) with the state variables. This can be done through the equations of the model:

x^n_IM = f_x [r^n_21 (x - x_CAMn) + r^n_22 (y - y_CAMn) + r^n_23 (z - z_CAMn)] / [r^n_11 (x - x_CAMn) + r^n_12 (y - y_CAMn) + r^n_13 (z - z_CAMn)]
y^n_IM = f_y [r^n_31 (x - x_CAMn) + r^n_32 (y - y_CAMn) + r^n_33 (z - z_CAMn)] / [r^n_11 (x - x_CAMn) + r^n_12 (y - y_CAMn) + r^n_13 (z - z_CAMn)]    ( 9 )

Here r^n_ij is the ij element of the rotation matrix from the global frame {E} to the frame of the n-th camera. For computing at instant k the centroid of the object at instant k-1, we make use of the following equation:

[x, y, z]^T_{k-1} = [x, y, z]^T_k - [v_x, v_y, v_z]^T_k Δt    ( 10 )

Then, an expression equivalent to (9) is obtained. The Jacobian matrices J_f and J_h used in the EKF equations can be easily obtained from these two expressions. However, only the matrix corresponding to the state evolution function is reproduced here due to space limitations:

J_f = [ I_3  Δt·I_3 ;  0_3  I_3 ]    ( 11 )

Now let us consider the general position estimation problem with N cameras. The state vector is the same as defined in (6); however, for practical reasons, the measurement vector will only contain the centroid of the tracked object at instants k and k-1 for one camera at a time:

z^n_k = [x^n_IM,k, y^n_IM,k, x^n_IM,k-1, y^n_IM,k-1]^T    ( 12 )

The vector z^n_k represents the measurements of the n-th camera at iteration k. As the number of cameras increases, the assumption of simultaneous image acquisition might not be valid, and a model with different image acquisition times is preferred instead. This requires a synchronization process with a global time stamp indicating when each image was taken. Let us define the following variables:

t: time instant of the last estimation update
t^n_acq: time instant when the image of the n-th camera was captured

If t^m_acq is the time stamp of the last image captured, then:

Δt = t^m_acq - t ;  t = t^m_acq    ( 13 )

The rest of the computations are the same as described for the two-camera case.

2.4. Visual tracking algorithm

Visual tracking applied to position estimation and control imposes hard restrictions on computation time, in the sense that delays in the position measurements significantly affect the performance of the trajectory control, limiting the speed of the vehicle to prevent it from becoming unstable. In addition, vision-based tracking algorithms should have other properties:

- Robustness to light conditions
- Noise immunity
- Ability to support changes in the orientation of the tracked object
- Low memory requirements, which usually also implies low computation time
- Ability to recover from temporary losses (occlusions, object out of the FoV)
- Tolerance to image blurring due to camera motion
- Applicability with moving cameras

It must be taken into account that quadrotors have a small surface, and their projection in the image can change considerably due to their X shape. On the other hand, using color markers does not affect the quadrotor control, but simplifies the visual detection task. In this research, we tested two tracking algorithms: TLD (Tracking-Learning-Detection) and a modified version of CAMShift. The first one builds a model of the tracked object while it executes, adding a new template to the list when a significant difference between the current observation and the model is detected. For smooth operation, TLD needs the tracked object to have significant edges and contours, since detection is made through a correlation between templates and image patches at different positions and scales. This makes its use for quadrotor tracking difficult due to the small surface and the uniformity of the quadrotor shape. Experimental results show the following problems in the application of this algorithm:

- Significant error in the centroid estimation
- Relatively high computation time
- Too many false positives

- The bounding box around the object tends to diverge
- The tracked object is usually lost when it moves far from the camera
- The tracked object is lost when the background is not uniform

Better results were obtained with a modified version of the CAMShift algorithm (Continuously Adaptive Mean-Shift). CAMShift is a color-based tracking algorithm, so a color marker has to be placed on a visible part of the quadrotor, in contrast with the background color. In our tests, we put two small orange balls on top of the quadrotor, while the floor of the testbed was blue. The tracked object (the two orange balls, not the quadrotor) is represented by a histogram of the hue component containing the color distribution of the object. Here the HSV (Hue-Saturation-Value) image representation is used instead of the RGB color space. This representation allows the extraction of color information and its treatment as a one-dimensional magnitude, so histogram-based techniques can be applied. Saturation (color density) and Value (brightness) are limited in order to reject noise and other perturbations. For every image received, the CAMShift algorithm computes a probability image by weighting the hue component of every pixel with the color distribution histogram of the tracked object, so pixels with a color closer to that of the object have higher probabilities. The Mean-Shift algorithm is then applied to obtain the maximum of the probability image in an iterative process: the centroid of the probability distribution is computed within a window that slides in the direction of the maximum until its center converges.
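The probability-image ("back projection") step just described can be sketched in a few lines of Python: a normalized hue histogram of the marker region is built, and every pixel of the incoming image is weighted by it. The bin count, hue range (0–180, as in OpenCV's HSV convention) and all names below are illustrative assumptions, not taken from the thesis implementation, which uses OpenCV in C++.

```python
N_BINS = 8          # hue quantization; hue assumed in [0, 180)

def hue_histogram(hues):
    """Normalized hue histogram of the marker region."""
    hist = [0] * N_BINS
    for h in hues:
        hist[int(h) * N_BINS // 180] += 1
    total = sum(hist)
    return [c / total for c in hist]

def back_project(hue_image, hist):
    """Weight every pixel's hue by the marker's color distribution:
    pixels whose color matches the marker get high probability."""
    return [[hist[int(h) * N_BINS // 180] for h in row] for row in hue_image]

# Marker region dominated by "orange" hues (~10-20 degrees):
marker_hues = [12, 14, 15, 11, 13, 16]
hist = hue_histogram(marker_hues)

# Tiny test image: orange pixels vs. blue-ish background (hue ~120):
hue_image = [[14, 120],
             [121, 13]]
prob = back_project(hue_image, hist)
print(prob)   # orange pixels -> 1.0, background -> 0.0
```

In the real system the same idea is applied per frame, with Saturation and Value thresholds masking out unreliable pixels before the histogram lookup.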
Denoting the probability image by P(x,y), the centroid of the distribution inside the search window is given by:

x_c = \frac{M_{10}}{M_{00}}, \qquad y_c = \frac{M_{01}}{M_{00}}    ( 14 )

where M_{00}, M_{10} and M_{01} are the zero- and first-order moments of the probability image, computed as follows:

M_{00} = \sum_x \sum_y P(x,y); \quad M_{10} = \sum_x \sum_y x \, P(x,y); \quad M_{01} = \sum_x \sum_y y \, P(x,y)    ( 15 )

The CAMShift algorithm returns an oriented ellipse around the tracked object, whose dimensions and orientation are obtained from the second-order moments. The basic implementation of CAMShift assumes that there is a nonzero probability in the image at all times. However, if the tracked object is temporarily lost from the image due to occlusions or because it is out of the field of view, the algorithm must be able to detect the loss and redetect the object, so tracking can be reset once the object is visible again. This can be seen as a two-state machine, as shown in Figure 7:
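The moment computations of (14)-(15) and the sliding-window convergence can be sketched directly; the toy probability image, window size and function names below are illustrative assumptions, not values from the experiments.

```python
def window_centroid(P, x0, y0, w, h):
    """Centroid of probability image P inside [x0, x0+w) x [y0, y0+h),
    using the zero- and first-order moments M00, M10, M01."""
    M00 = M10 = M01 = 0.0
    for y in range(y0, y0 + h):
        for x in range(x0, x0 + w):
            p = P[y][x]
            M00 += p
            M10 += x * p
            M01 += y * p
    return M10 / M00, M01 / M00

def mean_shift(P, x0, y0, w, h, iters=20):
    """Slide the window toward the distribution maximum until it converges."""
    for _ in range(iters):
        xc, yc = window_centroid(P, x0, y0, w, h)
        nx0, ny0 = int(round(xc - w / 2)), int(round(yc - h / 2))
        if (nx0, ny0) == (x0, y0):
            break
        x0, y0 = nx0, ny0
    return x0, y0

# 6x6 probability image with a bright 2x2 blob centered at (3.5, 3.5):
P = [[0.0] * 6 for _ in range(6)]
for y in (3, 4):
    for x in (3, 4):
        P[y][x] = 1.0

print(mean_shift(P, 0, 0, 4, 4))   # -> (2, 2): window settles over the blob
```

Starting from the corner, the window slides step by step toward the blob and stops once its centroid coincides with the window center, which is exactly the iteration CAMShift performs per frame before re-sizing the window.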

Figure 7. State machine (Tracking / Detecting) implemented in the modified version of the CAMShift algorithm

For detecting object loss and for object redetection, a number of criteria can be used:

- The zero-order moment, as an estimation of the object size
- The size, dimensions or aspect ratio of the bounding box around the object
- One or more templates of the near surroundings of the color marker that include the quadrotor
- A vector of features obtained from SURF, SIFT or similar algorithms

For the detection process, the incoming image is divided into a set of patches and, for every patch, a measurement of the probability that the tracked object is contained there is computed; the detector then returns the position of the most probable patch. If this patch is a false positive, it is rejected by the object loss detector, and a new detection process begins. Otherwise, CAMShift uses this patch as the initial search window. The experimental results led us to conclude:

- CAMShift is about 5-10 times faster than TLD
- The modified version of CAMShift can recover in a short time from object loss once the object is visible again
- False positives are rejected by the object loss detector
- CAMShift can follow objects farther away than TLD
- CAMShift requires much fewer computational resources than TLD

2.5. Experimental results

This section presents graphical and numerical results of vision-based position estimation using the algorithms explained above. First, the software developed specifically for these experiments is described, as well as the conditions, equipment and personnel involved.

Software implementation

Three software modules were developed to support the vision-based position estimation experiments. These experiments were divided into two phases: the data acquisition phase and the data analysis phase. Real-time estimation experiments have not been done.

Data acquisition module: this program was written in C++ for both the Ubuntu 12.04 and Windows 8 operating systems, using the Eclipse Juno IDE and Microsoft Visual Studio Express 2012, respectively. It makes use of the OpenCV (Open Computer Vision) library, as well as the Vicon DataStream SDK library. The program contains a main loop where images from the two cameras are captured and saved as individual files, along with the measurements of the position and orientation of both cameras and the tracked object given by a Vicon motion capture system. These measurements are saved in a text file with their corresponding time stamp. Images and measurements are assumed to be captured at the same time for the estimation process, although in practice these data are obtained sequentially, so there is a slight delay. Before the data acquisition loop, the user must specify the resolution of the cameras and the name of the folder where images and Vicon measurements are stored.

Tracking and position estimation module: this program was also implemented in C++ for Ubuntu 12.04 and Windows 8, using the OpenCV and ROS (Robot Operating System) libraries. It was designed to accept data in real time, but also data captured by the acquisition program. It has not been tested in real time; until now, it has only been used for off-line position estimation. The program contains a main loop with the execution of the tracking algorithm and the extended Kalman filter. It also performs the same functions as the data acquisition program, taking images from the cameras sequentially, as well as position and orientation measurements from Vicon.
The modified version of the CAMShift algorithm returns the centroid of the tracked quadrotor for every image from both cameras. The position estimate is then updated with this information and visualized with the rviz application from ROS. It was found that using the ROS and Vicon DataStream libraries simultaneously causes an execution error that has also been reported by other users. The tracking and position estimation program is not complete yet: the selection of the tracked object is done manually, drawing a rectangle around it. The modified CAMShift provides good results, taking into account the fast movement of the quadrotor and the blurring of the images in some experiments, but in a number of situations it returns false positives when the tracked object is out of the field of view and there is an object with a similar color within the image. On the other hand, Kalman filter tuning takes too much time when performed together with the tracking algorithm. However, the position estimation is computed from the position and orientation measurements and from the centroid of the tracked object on the image plane of both cameras, so the images are no longer necessary once the centroids have been obtained by the tracking algorithm.

Position estimation module: the position estimation algorithm was implemented in a MATLAB script in order to make Kalman filter tuning easier and faster. It takes as input the position and orientation measurements of the cameras and the quadrotor

(used as ground truth), the time stamp, the centroid of the tracked object given by CAMShift, and a flag indicating, for every camera, whether tracking is lost in the current frame. As output, the estimator provides the position and velocity of the quadrotor in the global coordinate system. In the data acquisition phase, the real position of the quadrotor was also recorded, making the computation of the estimation error possible. This magnitude, the distance between the tracked quadrotor and the cameras, the tracking loss flag and other signals are represented graphically for better analysis of the results.

Description of the experiments

The data acquisition experiments were carried out in the CATEC testbed, using its Vicon motion capture system for obtaining the position and orientation of the cameras and the tracked quadrotor. The acquisition program was executed on a workstation provided by CATEC or on a laptop provided by the University of Seville. Two Logitech C525 USB cameras were connected to the computer through five-meter USB cables, which limited the mobility of the cameras when following the quadrotor during the experiments. The cameras were mounted on independent bases whose position was measured by Vicon. The optical axis of each camera corresponded to the X axis of its base. It is important for the estimation that both axes are parallel; otherwise, an estimation error proportional to the distance between the cameras and the quadrotor is derived. The tracked object was a Hummingbird quadrotor. Two orange balls or a little rubber hat were placed on top of the UAV as a visual marker, in contrast with the blue floor of the testbed, as shown in Figure 8. The cameras tried to stay focused on this marker. One important aspect regarding the cameras is the autofocus. For the data acquisition experiments, two webcam models were used: the Genius eFace 2025 (manually adjustable focus) and the Logitech C525 (autofocus).
For applications with moving objects, cameras with fixed or manually adjustable focus are not recommended. On the other hand, the image quality of the Logitech C525 was much better than that of the Genius eFace 2025.

Figure 8. Orange rubber hat on top of the Hummingbird quadrotor used as visual marker

The experiments were carried out by three or four persons:

- The pilot of the quadrotor
- The person in charge of the data acquisition program
- Two persons for handling the cameras

At the beginning of each experiment, the coordinator indicated to the pilot and the persons responsible for the cameras the position and motion pattern to be executed, according to the previously defined planning. Then, the resolution of the images and the name of the folder where the Vicon data and acquired images would be saved were specified. Each experiment took between 2 and 5 minutes. The total number of images acquired was around 40,000. The initial setup and the execution of the experiments were carried out in four hours.

Analysis of the results

The position estimation results are presented here for different conditions, explained separately. The experiments were designed to cover a wide range of situations and configurations, with different camera resolutions. The graphics corresponding to the estimation error also represent the distance between each of the cameras and the quadrotor for magnitude comparison. Typically, the estimation error in position is around 0.15 m for a mean distance of 5 m from the cameras, although, as will be seen later, the error strongly depends on the relative position between the cameras and the quadrotor. The effect of tracking loss is represented with a blue or green '*' character for one of the cameras, and with a red '*' character if tracking is lost for both cameras.

Fixed quadrotor with parallel cameras

In this experiment, the quadrotor was fixed on the floor. The optical axes of the cameras were parallel, with a baseline of around 1.5 meters; the situation is the one described in Figure 9. A resolution of 640x480 was selected. Figure 10 shows the estimation error in XYZ, as well as the distance between each of the cameras and the quadrotor. As can be seen, the position estimation error in the X and Z axes is around 15 cm; however, it reaches 3 m in the Y axis when the distance from the cameras is maximum. In general, the more parallel the optical axes of the cameras are, the higher the error in depth estimation is.

Figure 9. Camera configuration with parallel optical axes along the Y-axis of the global frame and fixed quadrotor
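The depth-error trend noted above can be illustrated with the standard parallel-stereo relation Z = fB/d (depth from focal length f, baseline B and disparity d): a fixed disparity error δd then maps, to first order, to a depth error of about Z²δd/(fB), growing quadratically with distance and shrinking with baseline. The focal length (in pixels) and baselines below are illustrative values, not the experimental ones.

```python
f = 600.0            # focal length [pixels] (illustrative)
delta_d = 1.0        # disparity error [pixels]

def depth_error(Z, B):
    """First-order depth error Z^2 * delta_d / (f * B) at depth Z, baseline B."""
    return Z * Z * delta_d / (f * B)

for B in (0.3, 1.5):                      # short vs. long baseline [m]
    errs = [round(depth_error(Z, B), 3) for Z in (2.0, 5.0, 8.0)]
    print(f"B={B} m -> depth error at 2/5/8 m: {errs}")
```

Running the loop shows the error at a given depth dropping by the baseline ratio, which is why the wider (and, in the limit, orthogonal) configurations in the following experiments estimate depth so much better.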

Figure 10. Position estimation error in XYZ (blue, green, red) and distance between cameras and quadrotor (magenta, black) for fixed quadrotor and parallel optical axes. Blue '*' marks correspond to instants with tracking loss from one of the cameras

Fixed quadrotor with orthogonal camera configuration

Now the configuration is the one shown in Figure 11. The optical axes of both cameras are orthogonal, corresponding to the best case for depth estimation. This fact is confirmed by the results shown in Figure 12, where it can be seen that the estimation error has been reduced considerably.

Figure 11. Orthogonal configuration of the cameras

Figure 12. Position estimation error in XYZ (blue, green, red) and distance between cameras and quadrotor (magenta, black) for fixed quadrotor and orthogonal configuration of the cameras

Fixed quadrotor with moving cameras

This experiment is a combination of the two above. At the beginning, the cameras are parallel with a short baseline, which is why the estimation error shown in Figure 13 is initially high. Then, the cameras are moved until they reach the orthogonal configuration (at t = 17 s), reducing the error at the same time. Figure 14 represents in more detail the evolution of the estimation error when tracking loss occurs from t = 32.5 s until t = 34.5 s. In two seconds, the estimation error in the Y-axis changes by 50 cm due to the integration of the speed in the position estimation. In this case, the estimation is computed using monocular images from a single camera.

Figure 13. Estimation error and distance to the cameras with fixed quadrotor and moving cameras, initially with parallel optical axes and finally with orthogonal configuration

Figure 14. Evolution of the position estimation error with multiple tracking losses (marked by a '*' character) in one of the cameras
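The drift mechanism behind Figure 14 — position propagated by integrating the last velocity estimate while one camera's measurements are unavailable — can be sketched as a simple accumulation; the velocity error and update period below are illustrative assumptions, not the experimental values.

```python
def integrate_drift(v_err, dt, steps):
    """Accumulated position error from integrating a biased velocity estimate."""
    pos_err = 0.0
    history = []
    for _ in range(steps):
        pos_err += v_err * dt        # each prediction step adds v_err * dt
        history.append(pos_err)
    return history

# 0.25 m/s velocity error, 50 ms update period, 2 s outage -> 40 steps:
drift = integrate_drift(0.25, 0.05, 40)
print(round(drift[-1], 3))   # -> 0.5
```

The error grows linearly with the outage duration, which is why short tracking losses are tolerable while longer ones call for rejecting the estimate altogether.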

Fixed quadrotor with orthogonal camera configuration and tracking loss

The goal of this experiment is to study the effect of long-term tracking loss from one or both cameras on the position estimation. The quadrotor was in a fixed position with the cameras in an orthogonal configuration, as shown in Figure 15. Here, the quadrotor is out of the field of view (FoV) of the right camera. The estimation error results are represented in Figure 16. The green and blue '*' characters represent tracking loss from the left or right camera, while the red '*' characters correspond to tracking loss from both cameras simultaneously. The distance between each of the cameras and the tracked quadrotor has also been plotted in magenta and black. As can be seen, the error grows rapidly when the vision-based estimation becomes monocular: the error increases by 1 meter in around 4 seconds. The number of consecutive frames with tracking loss can be used as a criterion for rejecting the position estimation, defining a maximum threshold. This idea is shown in Figure 17, which represents the number of consecutive frames with tracking loss and a threshold of 15 frames.

Figure 15. Orthogonal camera configuration with tracked quadrotor out of the FoV of one of the cameras
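The frame-count rejection criterion above can be sketched as a small counter that resets on every successful frame and invalidates the estimate past the threshold; the 15-frame threshold follows the text, while the class and variable names are illustrative.

```python
class TrackingLossMonitor:
    def __init__(self, threshold=15):
        self.threshold = threshold
        self.consecutive_lost = 0

    def update(self, tracked):
        """Feed one frame; returns True while the estimate is still valid."""
        self.consecutive_lost = 0 if tracked else self.consecutive_lost + 1
        return self.consecutive_lost <= self.threshold

monitor = TrackingLossMonitor()
frames = [True] * 5 + [False] * 20          # 5 good frames, then a long loss
validity = [monitor.update(t) for t in frames]
print(validity.index(False))   # -> 20: first rejected frame (5 good + 15 tolerated)
```

A single recovered frame resets the counter, so brief occlusions never trigger rejection while sustained losses do.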

Figure 16. Position estimation error and distance to the cameras with long-duration tracking loss for one of the cameras (blue and green '*' characters) and both cameras (red '*' characters)

Figure 17. Number of consecutive frames with tracking loss (blue) and threshold (red)

Z-axis estimation with flying quadrotor and parallel camera configuration

In this experiment the quadrotor height is estimated with the cameras in the configuration indicated in Figure 18 and a resolution of 1280x720 pixels. The pilot of the quadrotor was asked to perform movements along the Z-axis with an amplitude of two meters.


V-PITS : VIDEO BASED PHONOMICROSURGERY INSTRUMENT TRACKING SYSTEM. Ketan Surender V-PITS : VIDEO BASED PHONOMICROSURGERY INSTRUMENT TRACKING SYSTEM by Ketan Surender A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science (Electrical Engineering)

More information

Force/position control of a robotic system for transcranial magnetic stimulation

Force/position control of a robotic system for transcranial magnetic stimulation Force/position control of a robotic system for transcranial magnetic stimulation W.N. Wan Zakaria School of Mechanical and System Engineering Newcastle University Abstract To develop a force control scheme

More information

Fast Detection and Tracking of Faces in Uncontrolled Environments for Autonomous Robots Using the CNN-UM

Fast Detection and Tracking of Faces in Uncontrolled Environments for Autonomous Robots Using the CNN-UM Fast Detection and Tracking of Faces in Uncontrolled Environments for Autonomous Robots Using the CNN-UM J. McRaven, M. Scheutz, Gy. Cserey, V. Andronache and W. Porod Department of Electrical Engineering

More information

Single Image 3D Reconstruction of Ball Motion and Spin From Motion Blur

Single Image 3D Reconstruction of Ball Motion and Spin From Motion Blur Single Image 3D Reconstruction of Ball Motion and Spin From Motion Blur An Experiment in Motion from Blur Giacomo Boracchi, Vincenzo Caglioti, Alessandro Giusti Objective From a single image, reconstruct:

More information

Static Environment Recognition Using Omni-camera from a Moving Vehicle

Static Environment Recognition Using Omni-camera from a Moving Vehicle Static Environment Recognition Using Omni-camera from a Moving Vehicle Teruko Yata, Chuck Thorpe Frank Dellaert The Robotics Institute Carnegie Mellon University Pittsburgh, PA 15213 USA College of Computing

More information

E190Q Lecture 5 Autonomous Robot Navigation

E190Q Lecture 5 Autonomous Robot Navigation E190Q Lecture 5 Autonomous Robot Navigation Instructor: Chris Clark Semester: Spring 2014 1 Figures courtesy of Siegwart & Nourbakhsh Control Structures Planning Based Control Prior Knowledge Operator

More information

Mouse Control using a Web Camera based on Colour Detection

Mouse Control using a Web Camera based on Colour Detection Mouse Control using a Web Camera based on Colour Detection Abhik Banerjee 1, Abhirup Ghosh 2, Koustuvmoni Bharadwaj 3, Hemanta Saikia 4 1, 2, 3, 4 Department of Electronics & Communication Engineering,

More information

An Energy-Based Vehicle Tracking System using Principal Component Analysis and Unsupervised ART Network

An Energy-Based Vehicle Tracking System using Principal Component Analysis and Unsupervised ART Network Proceedings of the 8th WSEAS Int. Conf. on ARTIFICIAL INTELLIGENCE, KNOWLEDGE ENGINEERING & DATA BASES (AIKED '9) ISSN: 179-519 435 ISBN: 978-96-474-51-2 An Energy-Based Vehicle Tracking System using Principal

More information

3D Vision An enabling Technology for Advanced Driver Assistance and Autonomous Offroad Driving

3D Vision An enabling Technology for Advanced Driver Assistance and Autonomous Offroad Driving 3D Vision An enabling Technology for Advanced Driver Assistance and Autonomous Offroad Driving AIT Austrian Institute of Technology Safety & Security Department Christian Zinner Safe and Autonomous Systems

More information

Problem definition: optical flow

Problem definition: optical flow Motion Estimation http://www.sandlotscience.com/distortions/breathing_objects.htm http://www.sandlotscience.com/ambiguous/barberpole.htm Why estimate motion? Lots of uses Track object behavior Correct

More information

Abstract. Introduction

Abstract. Introduction SPACECRAFT APPLICATIONS USING THE MICROSOFT KINECT Matthew Undergraduate Student Advisor: Dr. Troy Henderson Aerospace and Ocean Engineering Department Virginia Tech Abstract This experimental study involves

More information

The Use of Camera Information in Formulating and Solving Sensor Fusion Problems

The Use of Camera Information in Formulating and Solving Sensor Fusion Problems The Use of Camera Information in Formulating and Solving Sensor Fusion Problems Thomas Schön Division of Automatic Control Linköping University Sweden Oc c The Problem Inertial sensors Inertial sensors

More information

Virtual Mouse Using a Webcam

Virtual Mouse Using a Webcam 1. INTRODUCTION Virtual Mouse Using a Webcam Since the computer technology continues to grow up, the importance of human computer interaction is enormously increasing. Nowadays most of the mobile devices

More information

Building an Advanced Invariant Real-Time Human Tracking System

Building an Advanced Invariant Real-Time Human Tracking System UDC 004.41 Building an Advanced Invariant Real-Time Human Tracking System Fayez Idris 1, Mazen Abu_Zaher 2, Rashad J. Rasras 3, and Ibrahiem M. M. El Emary 4 1 School of Informatics and Computing, German-Jordanian

More information

Virtual Teaching and Painting Platform for the Colour Blind

Virtual Teaching and Painting Platform for the Colour Blind IOSR Journal of Computer Engineering (IOSR-JCE) e-issn: 2278-0661, p- ISSN: 2278-8727Volume 16, Issue 3, Ver. VI (May-Jun. 2014), PP 01-09 Virtual Teaching and Painting Platform for the Colour Blind 1

More information

Human and Moving Object Detection and Tracking Using Image Processing

Human and Moving Object Detection and Tracking Using Image Processing International Journal of Engineering and Technical Research (IJETR) ISSN: 2321-0869, Volume-2, Issue-3, March 2014 Human and Moving Object Detection and Tracking Using Image Processing Akash V. Kavitkar,

More information

BABY BOT, A CHILD MONITORING ROBOTIC SYSTEM. Yanfei Liu, Christole Griffith, and Parul Reddy 1. INTRODUCTION

BABY BOT, A CHILD MONITORING ROBOTIC SYSTEM. Yanfei Liu, Christole Griffith, and Parul Reddy 1. INTRODUCTION BABY BOT, A CHILD MONITORING ROBOTIC SYSTEM Yanfei Liu, Christole Griffith, and Parul Reddy Indiana Purdue University Fort Wayne, Indiana; Email: liu@engr.ipfw.edu 1. INTRODUCTION For over eight decades,

More information

Encoders for Linear Motors in the Electronics Industry

Encoders for Linear Motors in the Electronics Industry Technical Information Encoders for Linear Motors in the Electronics Industry The semiconductor industry and automation technology increasingly require more precise and faster machines in order to satisfy

More information

Computer Vision - part II

Computer Vision - part II Computer Vision - part II Review of main parts of Section B of the course School of Computer Science & Statistics Trinity College Dublin Dublin 2 Ireland www.scss.tcd.ie Lecture Name Course Name 1 1 2

More information

Enhancing the SNR of the Fiber Optic Rotation Sensor using the LMS Algorithm

Enhancing the SNR of the Fiber Optic Rotation Sensor using the LMS Algorithm 1 Enhancing the SNR of the Fiber Optic Rotation Sensor using the LMS Algorithm Hani Mehrpouyan, Student Member, IEEE, Department of Electrical and Computer Engineering Queen s University, Kingston, Ontario,

More information

A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA

A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA N. Zarrinpanjeh a, F. Dadrassjavan b, H. Fattahi c * a Islamic Azad University of Qazvin - nzarrin@qiau.ac.ir

More information

Module 1 : A Crash Course in Vectors Lecture 2 : Coordinate Systems

Module 1 : A Crash Course in Vectors Lecture 2 : Coordinate Systems Module 1 : A Crash Course in Vectors Lecture 2 : Coordinate Systems Objectives In this lecture you will learn the following Define different coordinate systems like spherical polar and cylindrical coordinates

More information

Digital Photogrammetric System. Version 6.0.2 USER MANUAL. Block adjustment

Digital Photogrammetric System. Version 6.0.2 USER MANUAL. Block adjustment Digital Photogrammetric System Version 6.0.2 USER MANUAL Table of Contents 1. Purpose of the document... 4 2. General information... 4 3. The toolbar... 5 4. Adjustment batch mode... 6 5. Objects displaying

More information

Ultrasonic sonars Ultrasonic sonars

Ultrasonic sonars Ultrasonic sonars Sensors for autonomous vehicles Thomas Hellström Dept. of Computing Science Umeå University Sweden Sensors Problems with mobility Autonomous Navigation Where am I? - Localization Where have I been - Map

More information

UAV Pose Estimation using POSIT Algorithm

UAV Pose Estimation using POSIT Algorithm International Journal of Digital ontent Technology and its Applications. Volume 5, Number 4, April 211 UAV Pose Estimation using POSIT Algorithm *1 M. He, 2. Ratanasawanya, 3 M. Mehrandezh, 4 R. Paranjape

More information

Tracking Algorithms. Lecture17: Stochastic Tracking. Joint Probability and Graphical Model. Probabilistic Tracking

Tracking Algorithms. Lecture17: Stochastic Tracking. Joint Probability and Graphical Model. Probabilistic Tracking Tracking Algorithms (2015S) Lecture17: Stochastic Tracking Bohyung Han CSE, POSTECH bhhan@postech.ac.kr Deterministic methods Given input video and current state, tracking result is always same. Local

More information

The Scientific Data Mining Process

The Scientific Data Mining Process Chapter 4 The Scientific Data Mining Process When I use a word, Humpty Dumpty said, in rather a scornful tone, it means just what I choose it to mean neither more nor less. Lewis Carroll [87, p. 214] In

More information

C4 Computer Vision. 4 Lectures Michaelmas Term Tutorial Sheet Prof A. Zisserman. fundamental matrix, recovering ego-motion, applications.

C4 Computer Vision. 4 Lectures Michaelmas Term Tutorial Sheet Prof A. Zisserman. fundamental matrix, recovering ego-motion, applications. C4 Computer Vision 4 Lectures Michaelmas Term 2004 1 Tutorial Sheet Prof A. Zisserman Overview Lecture 1: Stereo Reconstruction I: epipolar geometry, fundamental matrix. Lecture 2: Stereo Reconstruction

More information

Development of an automated Red Light Violation Detection System (RLVDS) for Indian vehicles

Development of an automated Red Light Violation Detection System (RLVDS) for Indian vehicles CS11 59 Development of an automated Red Light Violation Detection System (RLVDS) for Indian vehicles Satadal Saha 1, Subhadip Basu 2 *, Mita Nasipuri 2, Dipak Kumar Basu # 2 # AICTE Emeritus Fellow 1 CSE

More information

Image Projection. Goal: Introduce the basic concepts and mathematics for image projection.

Image Projection. Goal: Introduce the basic concepts and mathematics for image projection. Image Projection Goal: Introduce the basic concepts and mathematics for image projection. Motivation: The mathematics of image projection allow us to answer two questions: Given a 3D scene, how does it

More information

2. Dynamics, Control and Trajectory Following

2. Dynamics, Control and Trajectory Following 2. Dynamics, Control and Trajectory Following This module Flying vehicles: how do they work? Quick refresher on aircraft dynamics with reference to the magical flying space potato How I learned to stop

More information

Visual-Inertial Sensor Fusion for Autonomous Navigation of Computationally Constrained Aerial Vehicles

Visual-Inertial Sensor Fusion for Autonomous Navigation of Computationally Constrained Aerial Vehicles Visual-Inertial Sensor Fusion for Autonomous Navigation of Computationally Constrained Aerial Vehicles Stephan Weiss Alpen Adria Universität Control of Networked Systems Stephan.Weiss@aau.at State Estimation

More information

Multi-Touch Control Wheel Software Development Kit User s Guide

Multi-Touch Control Wheel Software Development Kit User s Guide Multi-Touch Control Wheel Software Development Kit User s Guide V3.0 Bulletin #1204 561 Hillgrove Avenue LaGrange, IL 60525 Phone: (708) 354-1040 Fax: (708) 354-2820 E-mail: instinct@grayhill.com www.grayhill.com/instinct

More information

KINEMATICS OF PARTICLES RELATIVE MOTION WITH RESPECT TO TRANSLATING AXES

KINEMATICS OF PARTICLES RELATIVE MOTION WITH RESPECT TO TRANSLATING AXES KINEMTICS OF PRTICLES RELTIVE MOTION WITH RESPECT TO TRNSLTING XES In the previous articles, we have described particle motion using coordinates with respect to fixed reference axes. The displacements,

More information

Models and Filters for camera-based Multi-target Tracking. Dr.-Ing. Mirko Meuter interactive Summer School 4-6 July, 2012

Models and Filters for camera-based Multi-target Tracking. Dr.-Ing. Mirko Meuter interactive Summer School 4-6 July, 2012 Models and Filters for camera-based Multi-target Tracking Dr.-Ing. Mirko Meuter interactive Summer School 4-6 July, 2012 Outline: Contents of the Presentation From detection to tracking Overview over camera

More information

Course 8. An Introduction to the Kalman Filter

Course 8. An Introduction to the Kalman Filter Course 8 An Introduction to the Kalman Filter Speakers Greg Welch Gary Bishop Kalman Filters in 2 hours? Hah! No magic. Pretty simple to apply. Tolerant of abuse. Notes are a standalone reference. These

More information

Improved Billboard Clouds for Extreme Model Simplification

Improved Billboard Clouds for Extreme Model Simplification Improved Billboard Clouds for Extreme Model Simplification I.-T. Huang, K. L. Novins and B. C. Wünsche Graphics Group, Department of Computer Science, University of Auckland, Private Bag 92019, Auckland,

More information

Intelligent Submersible Manipulator-Robot, Design, Modeling, Simulation and Motion Optimization for Maritime Robotic Research

Intelligent Submersible Manipulator-Robot, Design, Modeling, Simulation and Motion Optimization for Maritime Robotic Research 20th International Congress on Modelling and Simulation, Adelaide, Australia, 1 6 December 2013 www.mssanz.org.au/modsim2013 Intelligent Submersible Manipulator-Robot, Design, Modeling, Simulation and

More information

PHOTOGRAMMETRIC TECHNIQUES FOR MEASUREMENTS IN WOODWORKING INDUSTRY

PHOTOGRAMMETRIC TECHNIQUES FOR MEASUREMENTS IN WOODWORKING INDUSTRY PHOTOGRAMMETRIC TECHNIQUES FOR MEASUREMENTS IN WOODWORKING INDUSTRY V. Knyaz a, *, Yu. Visilter, S. Zheltov a State Research Institute for Aviation System (GosNIIAS), 7, Victorenko str., Moscow, Russia

More information

Detailed simulation of mass spectra for quadrupole mass spectrometer systems

Detailed simulation of mass spectra for quadrupole mass spectrometer systems Detailed simulation of mass spectra for quadrupole mass spectrometer systems J. R. Gibson, a) S. Taylor, and J. H. Leck Department of Electrical Engineering and Electronics, The University of Liverpool,

More information

Robotics. Lecture 3: Sensors. See course website http://www.doc.ic.ac.uk/~ajd/robotics/ for up to date information.

Robotics. Lecture 3: Sensors. See course website http://www.doc.ic.ac.uk/~ajd/robotics/ for up to date information. Robotics Lecture 3: Sensors See course website http://www.doc.ic.ac.uk/~ajd/robotics/ for up to date information. Andrew Davison Department of Computing Imperial College London Review: Locomotion Practical

More information

DYNAMIC RANGE IMPROVEMENT THROUGH MULTIPLE EXPOSURES. Mark A. Robertson, Sean Borman, and Robert L. Stevenson

DYNAMIC RANGE IMPROVEMENT THROUGH MULTIPLE EXPOSURES. Mark A. Robertson, Sean Borman, and Robert L. Stevenson c 1999 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or

More information

Calibration and Georeferencing of Aerial Digital Cameras

Calibration and Georeferencing of Aerial Digital Cameras 'Photogrammetric Week 05' Dieter Fritsch, Ed. Wichmann Verlag, Heidelberg 2005. Hofmann 105 Calibration and Georeferencing of Aerial Digital Cameras OTTO HOFMANN, Brunnthal ABSTRACT The conventional determination

More information

Aerospace Information Technology Topics for Internships and Bachelor s and Master s Theses

Aerospace Information Technology Topics for Internships and Bachelor s and Master s Theses Aerospace Information Technology s for Internships and Bachelor s and Master s Theses Version Nov. 2014 The Chair of Aerospace Information Technology addresses several research topics in the area of: Avionic

More information

Application of distance measuring with Matlab/Simulink

Application of distance measuring with Matlab/Simulink Application of distance measuring with Matlab/Simulink Mircea Coman 1, Sergiu-Dan Stan 1, Milos Manic 2, Radu Balan 1 1 Dept. of Mechatronics, Technical University of Cluj-Napoca, Cluj-Napoca, Romania

More information

Synthetic Sensing: Proximity / Distance Sensors

Synthetic Sensing: Proximity / Distance Sensors Synthetic Sensing: Proximity / Distance Sensors MediaRobotics Lab, February 2010 Proximity detection is dependent on the object of interest. One size does not fit all For non-contact distance measurement,

More information

DINAMIC AND STATIC CENTRE OF PRESSURE MEASUREMENT ON THE FORCEPLATE. F. R. Soha, I. A. Szabó, M. Budai. Abstract

DINAMIC AND STATIC CENTRE OF PRESSURE MEASUREMENT ON THE FORCEPLATE. F. R. Soha, I. A. Szabó, M. Budai. Abstract ACTA PHYSICA DEBRECINA XLVI, 143 (2012) DINAMIC AND STATIC CENTRE OF PRESSURE MEASUREMENT ON THE FORCEPLATE F. R. Soha, I. A. Szabó, M. Budai University of Debrecen, Department of Solid State Physics Abstract

More information

Sensors and Cellphones

Sensors and Cellphones Sensors and Cellphones What is a sensor? A converter that measures a physical quantity and converts it into a signal which can be read by an observer or by an instrument What are some sensors we use every

More information

Analecta Vol. 8, No. 2 ISSN 2064-7964

Analecta Vol. 8, No. 2 ISSN 2064-7964 EXPERIMENTAL APPLICATIONS OF ARTIFICIAL NEURAL NETWORKS IN ENGINEERING PROCESSING SYSTEM S. Dadvandipour Institute of Information Engineering, University of Miskolc, Egyetemváros, 3515, Miskolc, Hungary,

More information

Poker Vision: Playing Cards and Chips Identification based on Image Processing

Poker Vision: Playing Cards and Chips Identification based on Image Processing Poker Vision: Playing Cards and Chips Identification based on Image Processing Paulo Martins 1, Luís Paulo Reis 2, and Luís Teófilo 2 1 DEEC Electrical Engineering Department 2 LIACC Artificial Intelligence

More information