Evaluating the Performance of Systems for Tracking Football Players and Ball


Y. Li (School of Computing, Kingston University, Kingston upon Thames, U.K.)
A. Dore (D.I.B.E., University of Genova, Genova, Italy)
J. Orwell (School of Computing, Kingston University, Kingston upon Thames, U.K.)

Abstract

In this paper, we discuss the different approaches used for evaluating the results of tracking algorithms, and in particular for the analysis of football (soccer) tracking results. The focus of the study is on systems with multiple static cameras. The appropriate data representation and ground truth capture methods are discussed, and evaluation measures that indicate the performance of any given automatic tracker are presented. The evaluation method is demonstrated by comparing results from an implemented multi-camera tracker.

1. Introduction

Within the field of computer vision, comparison of algorithms requires an appropriate evaluation methodology. In this paper, we consider the different approaches that have been used to evaluate tracking algorithms, and apply this analysis to a relatively new domain: the tracking of football (soccer) players with multiple static cameras. Historically, tracking evaluation has used a diverse range of measures and procedures to establish a performance metric. The choice will inevitably depend on the target application, as the priorities vary between applications. Tracking of human motion has received significant attention over the last decade. Our impression of the performance evaluation of tracking techniques is that, to a large extent, rigorous quantitative evaluation has been neglected, or in many cases performed informally on a few visual demonstrations. While this is adequate for demonstrating new techniques, it does not necessarily provide conclusions that are valid across domains. It also introduces a degree of subjectivity into the analysis of the results.
As the field of visual surveillance matures, an evaluation of complete tracking systems is increasingly important for several reasons: comparison of different algorithms and of parameters therein, analysis of points of failure, and comparison of a single algorithm across different data domains. However, the specification and implementation of a full evaluation component for a tracking system can be a difficult task. In the football tracking domain, there are several important criteria and factors that determine the most appropriate evaluation methodology. The accuracy of camera calibration and synchronisation places practical limits on the accuracy with which the ground truth can be established. Innovative procedures are used to mark the 3D position of the ball, from single or multiple views. The intended use of the tracking data influences the most appropriate metric. For example, the identity of each player within a team is critical if the data is to be used for coaching applications, but less so for spectator applications. We demonstrate how distance-based metrics are used to evaluate the relative performance of tracking algorithms, or of parameters therein. In the next section, we review the work relevant to the evaluation of tracking, followed by a description of the tracking implementation that is used to generate results for the evaluation process. In Section 5 we present the steps we use to obtain the ground truth data. The procedure for evaluating the tracking results is described in Section 6, and the evaluation results are presented in Section 7.

2. Previous Work

2.1 Evaluation Methods

For visual surveillance systems, performance evaluation methods can be divided into two categories: evaluation with and without a ground truth (GT) data set. This data set contains information that is compared to the automatic tracker output (ATO), but is guaranteed to be correct up to some estimated tolerance.
First, we discuss evaluation methods without access to a GT data source. This approach enables practical tracking evaluation on a large quantity of video data and can be useful for real-time detection of tracking failure. Several techniques have been proposed for tracking performance evaluation where GT is not available, or it has been decided not to collect it [3, 17]. In [17], an algorithm for automatic performance evaluation that does not require GT data is presented. It defines several metrics based on the assumptions of direction and speed consistency, motion smoothness, and constancy of shape, area and appearance. Measuring the degree to which these qualities are present in the output of the tracking system is therefore a valid indicator of the performance of the system. One limitation is that it is only suitable for scenarios in which those assumptions are correct, i.e. for moving objects, such as vehicles, that do not change their direction and speed dramatically. In addition, it does not necessarily discriminate between successful and unsuccessful tracking in circumstances where multiple moving objects create occlusions. In [3], the authors propose a technique using synthetically generated video sequences to evaluate tracking performance. The method constructs sequences containing complex motion scenes by superimposing motion from isolated targets that have been successfully tracked. It allows the generation of a large variety of data sets representing different tracking scenarios, and demonstrates a method to assess the sequences based on the occurrence and duration of the dynamic occlusions which are most likely to cause a tracking algorithm to fail. The GT for these data sets is generated automatically by the procedure that creates the synthetic sequences. A number of methodologies [15, 13, 4] have been defined for tracker performance evaluation where GT data is provided. In [15], two approaches to measuring people-tracking performance are presented. The first approach requires a full set of GT data, as it compares the computed motion trajectories to the GT data, which enables a complete evaluation. The second approach is more pragmatic: it detects specific trajectory events (such as line crossings), and the comparison is based on counting these events rather than on the positions of people, thus emphasising recognition rather than tracking. In neglecting the tracking process, it fails to perform a full system evaluation. Needham [13] addresses the comparison of the ground truth trajectory and the tracker output trajectory.
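To illustrate the kind of self-evaluation possible without GT, the sketch below scores a trajectory by its speed and direction consistency. The scoring scale and the equal weighting of the two penalties are our illustrative assumptions, not the actual metrics of [17]:

```python
import math

def smoothness_score(track):
    """Score a trajectory (list of (x, y) points) by speed and
    direction consistency: 1.0 means perfectly smooth motion."""
    if len(track) < 3:
        return 1.0
    # Frame-to-frame displacement vectors.
    steps = [(b[0] - a[0], b[1] - a[1]) for a, b in zip(track, track[1:])]
    penalties = []
    for (ux, uy), (vx, vy) in zip(steps, steps[1:]):
        su, sv = math.hypot(ux, uy), math.hypot(vx, vy)
        # Speed consistency: relative change in step length.
        speed_pen = abs(sv - su) / max(su, sv, 1e-9)
        # Direction consistency: 0 when collinear, 1 when reversed.
        if su > 1e-9 and sv > 1e-9:
            cos = (ux * vx + uy * vy) / (su * sv)
            dir_pen = (1.0 - max(-1.0, min(1.0, cos))) / 2.0
        else:
            dir_pen = 0.0
        penalties.append(0.5 * speed_pen + 0.5 * dir_pen)
    return 1.0 - sum(penalties) / len(penalties)
```

A smoothly moving vehicle scores close to 1.0, while an abrupt reversal is penalised; as noted above, such a score cannot by itself distinguish a genuine direction change from a tracking failure.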
It defines a set of positional tracking evaluation metrics corresponding to different types of trajectories, based on human-marked-up GT data. This evaluation approach differs from the others in that it is not centred on a particular tracked position in space at a given time, but on the overall path generated by successive tracking information. In [4], the evaluation of background subtraction and tracking is discussed, including a track evaluation based on matching GT tracks to ATO tracks.

2.2 Evaluation of Football Tracking Results

There is a significant body of research [8, 7, 12] concentrating on football tracking, in which the performance evaluation of the tracking is mainly based on a comparison between the GT data and the tracking result. To evaluate the tracking performance from single and multiple camera views, Iwase and Saito use the spatial distance between GT and ATO as the evaluation measure [7]. In complex crowded scenes, moving objects very often occlude one another. It is, however, possible to recover from this problem by segmenting the blob representation of the player, as described in [5]. In both papers, a component of the evaluation is the percentage of solved occlusions. In [8], the authors evaluate the performance using identity tracking, as defined in Section 6.2 below.

2.3 Ground Truth Generation

A number of systems are currently available for generating GT manually and/or evaluating tracking performance, of which a few are presented here. The Video Performance Evaluation Resource (ViPER) [2] provides a series of tools to generate GT data. This is represented in XML format, for evaluation and visualisation procedures. The tools operate on a single camera view and thus only apply to 2D tracking. The Context Aware Vision using Image-based Active Recognition (CAVIAR) project [1] provides benchmark video data sets, along with GT data obtained by hand-labelling the images and described in XML format.
In addition to tracks of individuals, its innovation is the definition of groups of people, as a set of individuals reacting to one another [6]. A potential difficulty with this concept lies in establishing unequivocal criteria for interaction between two individuals. The Open Development for Video Surveillance (ODViS) [9] system provides an interactive framework allowing researchers to define GT data and analyse the performance of their tracking system.

3. Tracking Football Players

Several researchers have presented results on tracking the positions of football players. Some results are for a single fixed camera [12], others are for a single moving camera [10, 11], and there are results for tracking through multiple cameras [8, 7, 14]. For multiple-camera systems, the architectural design of the tracking system reflects the need to extract the salient information from each data source prior to integration with the other sources. Here, we briefly describe the method presented in [18]. This is implemented and used to demonstrate the evaluation method. In particular, the effect of varying a parameter (γ) controlling the probabilistic estimate of team category is investigated, to empirically determine the most suitable value. The system is composed of two stages. The first stage of processing extracts features from each camera. The second stage integrates these features into a common representation. The interface between these two stages is a set of bounding boxes and associated properties per frame. These properties are the estimated ground-plane position, an estimate of the category, and an indication of whether the object is being occluded.
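The per-frame interface between the two stages could be represented as follows; the record layout and field names are our assumptions, sketched for illustration:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Observation:
    """One first-stage output per tracked region and frame; the
    second-stage fusion consumes a list of these from every camera."""
    bbox: Tuple[int, int, int, int]       # x, y, width, height (pixels)
    ground_position: Tuple[float, float]  # estimated ground-plane (x, y)
    category: Tuple[float, ...]           # probability per category
    occluded: bool                        # is the object currently occluded?
```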

The first stage of processing uses a per-pixel Mixture-of-Gaussians adaptive background model to extract the foreground regions. Morphological opening operations and adaptive masks are used to reduce the number of false positive observations. These regions are tracked using a Kalman tracker, with the centroid position, its velocity and the region size as the state variables. This way, occlusions can be handled as partial observations that improve the tracking result. The category is determined by using the method of histogram intersection to calculate the relative overlap between the observation and each of the five histogram models (one for each category). For every observation, the values of the histogram intersection are normalised over the five categories to sum to one, and treated as a measure of the probability that the observation belongs to a category. To parameterise the relationship between the probability and the normalised histogram intersection value, the latter is raised to some power, γ, and then renormalised. The second stage then integrates the features from each of the cameras. The features are projected onto the ground plane, and their error covariance is calculated. This is used as a validation gate in the process that fuses these observations into joint observations, which ideally form a one-to-one mapping with the objects in the scene. These observations are matched on a nearest-Mahalanobis-neighbour basis to the Kalman state models that comprise the representation. The models can lose their association with the objects in the scene (false alarms, occlusions and rapid accelerations are typical causes of failure), so a heuristic policy for the creation and deletion of these state models must be devised. For each element in the model, the category estimate is calculated as a linear sum of first-stage category estimates, inversely weighted by their respective (spatial) covariances.
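The γ-parameterisation of the first-stage category probabilities described above can be sketched as follows (the function name is ours):

```python
def category_probabilities(intersections, gamma):
    """Normalise the histogram-intersection values for one observation
    to sum to one, raise each to the power gamma, then renormalise."""
    total = sum(intersections)
    normalised = [v / total for v in intersections]
    powered = [v ** gamma for v in normalised]
    z = sum(powered)
    return [v / z for v in powered]
```

With γ = 1 this reduces to the plain normalised histogram intersection; larger γ sharpens the distribution towards the best-matching category.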
In addition, the estimate is updated as a running average of recent observations, by adding a proportion α from the latest frame.

4. Output Representation

In this section, we describe how the positions of the players and the ball, and the events they generate, are represented in the prototype system. It is proposed to use the same format of representation for both the ground truth (GT) and the automatic tracking output (ATO) sources of data. As multiple cameras are used, ground-plane co-ordinates are the most suitable. The transformation to these co-ordinates is generated by Tsai's method [16] for co-planar calibration. We represent a player's position at time $t$ as the single element $i$ of the GT data set, $x_i(t)$. It has a 3D position $x_i \in \mathbb{R}^3$, an identity (shirt number) $p_i \in \mathbb{N}$ and a category $c_i \in \{c_1, \ldots, c_6\}$ (two teams, the goalkeepers, the referees and the ball). Thus, the complete set of ground truth elements comprises the set $X(t) = \{x_i(t) : i = 1, \ldots, m(t)\}$, where $m(t)$ is the number of elements in the set at time $t$. The ATO at time $t$ is represented by the set $Y(t)$. Likewise, it contains the set of elements $\{y_j : j = 1, \ldots, n(t)\}$, where $n(t)$ is the number of objects present in the ATO at $t$. Each of these elements includes the position $y_j$, an identity label $q_j$ and a category estimate $e_j(t)$. Note that, except for the time-dependence of the category estimate, the forms of the GT and ATO representations are identical.

5. Method of Ground Truth Capture

The GT data is generated by an operator using a custom graphical user interface. The operator specifies the real-world co-ordinates of players and ball by clicking on the appropriate view among the 8 camera outputs. Using the camera calibration co-efficients, the image-space points are transformed into the common ground co-ordinate system. In addition, the depth is required, but much of the activity can be approximated to lie in the ground plane of the football field.
The players are assumed to be situated on the ground plane (i.e. jumping in the air is currently ignored). The position of each player is defined as the mid-point of the line connecting the two points vertically below his two feet. For cases where the calibration is inconsistent, the nearest camera is used to compute the position. When the ball is on the ground, it is simple to locate its position. However, when the ball is off the ground, locating it is not trivial. We have identified two different methods for providing a solution, described below. The first method uses two cameras to triangulate the position (shown in Figure 1), treating the two observations as wide-baseline stereo. The second method uses a trajectory model for the ball, to interpolate its position between the two end-points of the path at which its 3D position can be more easily estimated. We use a parabolic model for the ball, assuming there is negligible friction. The ball travels in a parabolic curve between $t_1$ and $t_2$, at which times it bounces against the ground or a player. From knowledge of these two 3D positions and the time interval $t_2 - t_1$, the 3D trajectory is completely determined. It is calculated in three stages, as described below. First, the two 3D points, $p_1$ and $p_2$, must be specified by the operator. This is straightforward for a ball on the ground. For a point in the air (such as when the ball is headed by a player), the operator uses the best camera view to select two image points: the ball's point of contact and the ground point vertically below. The first point defines the 3D line that contains $p_1 = (p_{1x}, p_{1y}, p_{1z})$; the second point defines its position along that line. Given these points, the first step is to calculate the time $t^*$ at which the ball reaches the highest point of the parabola:

$$t^* = \frac{p_{1z} - p_{2z}}{g(t_1 - t_2)} + \frac{t_1 + t_2}{2} \qquad (1)$$

where $g$ is the gravitational acceleration. Thereupon, the initial vertical velocity $v_{1z}$ is calculated. Hence, the position at each subsequent time-step is estimated by iteratively calculating the trajectory in finite time intervals, using these initial conditions.

Figure 1: Ground-plane view (top) for two cameras, for which subimages are shown (bottom). With two views, 3D points can be triangulated.

6. Evaluating the Tracking Result

We assume the existence of a GT data set, and therefore the evaluation of the ATO data entails the comparison of these two data series. In this section, we discuss which factors are important contributors to the evaluation of the tracking result, and define evaluation measures accordingly. We consider the specific scenario of the football match, and then discuss how other scenarios differ. The primary goal is to assess the algorithm performance. The secondary goal is to identify specific errors and areas of weakness in the tracking performance, to aid the development process.

6.1 Player and Ball Positions

One of the more challenging aspects of the problem of tracking football players is that all outfield members of the same team are identically dressed. Naturally, this makes the task of discriminating between these players more difficult. This could be attempted through continuity of position, facial appearance, height or shirt number. However, some applications only require knowledge about the position of the team, without needing to identify the individual players within it. In this case, the output of the category of each player is sufficient. Additionally, the recognition of the team category can be regarded as a preliminary step towards the classification of the individual players. Thus, for the football scenario, we regard there to be two levels of player tracking and recognition.
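Both ball-capture methods from Section 5 lend themselves to a compact sketch: midpoint triangulation of the two viewing rays (method one), and the parabola of equation (1) evaluated in closed form rather than by finite time-steps (method two). The function names and the closed-form shortcut are our assumptions:

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def triangulate(c1, d1, c2, d2):
    """Midpoint triangulation: the 3D point halfway between the closest
    points of two viewing rays, each given by a camera centre c and a
    direction d (method one, wide-baseline stereo)."""
    c1, d1, c2, d2 = (np.asarray(v, dtype=float) for v in (c1, d1, c2, d2))
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    b = c2 - c1
    a12 = d1 @ d2
    denom = 1.0 - a12 * a12                     # zero only for parallel rays
    s = ((d1 @ b) - a12 * (d2 @ b)) / denom     # parameter along ray 1
    t = (a12 * (d1 @ b) - (d2 @ b)) / denom     # parameter along ray 2
    return (c1 + s * d1 + c2 + t * d2) / 2.0

def peak_time(t1, z1, t2, z2):
    """Equation (1): time t* of the parabola's apex, from the heights
    z1, z2 of the two end-points observed at times t1, t2."""
    return (z1 - z2) / (G * (t1 - t2)) + (t1 + t2) / 2.0

def ball_height(t, t1, z1, t2, z2):
    """Ball height at time t (method two): closed form of the parabola
    through (t1, z1) and (t2, z2), equivalent in the frictionless case
    to integrating the trajectory in finite time intervals."""
    v1z = G * (peak_time(t1, z1, t2, z2) - t1)  # initial vertical velocity
    return z1 + v1z * (t - t1) - 0.5 * G * (t - t1) ** 2
```

For example, a ball bouncing at ground level at $t_1 = 0$ and $t_2 = 2$ s peaks at $t^* = 1$ s, where the closed form gives a height of $g/2 \approx 4.9$ m.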
The higher level aims to denote each player, referee and the ball with a separate identity, which should be distinguished by the tracking system. This is referred to as identity tracking: each object on the pitch has its own label, which should be preserved throughout the course of the sequence. Identity tracking can be performed without knowledge of the names, because the labels can be arbitrarily assigned at the start of the sequence. In Section 6.2 we define the performance evaluation measure $P_I$ for identity tracking. The lower level aims only to identify the category of every object tracked on the pitch. In normal play, each object is classified as one of the six categories $e_i$ (two teams, two goalkeepers, referees and ball). Two players of the same team are regarded as identical. In this paper, this is referred to as category tracking. In Section 6.3 we define the performance evaluation measures $P_C$ and $Q_C$ for category tracking. At this point we introduce the proposed evaluation measures. The primary objective is to define a measure of the accuracy of the representation over the available sequence. We propose that this measure is defined over the two fundamental parameters of the representation, time and space:

1. At any given instant, the spatial accuracy of the ATO can be characterised as the proportion of players that are correctly represented to within $\Delta d$ metres of the GT data.

2. Over a period of time $\Delta t$, the temporal accuracy of the ATO can be characterised as the proportion of its tracks for which the relationship with a GT track is maintained.

The proposed evaluation measures operate on the sets of GT and ATO data, $\{X(t)\}$ and $\{Y(t)\}$, defined in Section 4. The spatial accuracy of the tracking result is measured in the ground plane, i.e. the distance measure operates on 2D ground-plane co-ordinates.
In the following two sections, it is shown how the two parameters combine in different ways in the cases of identity tracking and category tracking.

6.2 Evaluation of Identity Tracking

In this section, a definition is presented for $P_I$, the tracking performance evaluation across multiple cameras. Broadly speaking, it is the mean proportion of correctly tracked objects over a number of frames. We now define a method for calculating that proportion, from a GT data set $\{X(t_0), X(t_1), \ldots, X(T)\}$ and ATO output $\{Y(t_0), Y(t_1), \ldots, Y(T)\}$ (see Section 4 for notation). To define what is meant by correct, we introduce the idea of a mapping between the set $\{p_i(t)\}$ of identification numbers of the GT, and the set $\{q_j(t)\}$ of identification numbers of the ATO, both at time $t$. The identity mapping function $M_I(t, p_i)$ evaluates to the nearest ATO target from the set $\{q_j\}$, or else to the null token $\emptyset$ if no ATO target is within $\Delta d$ metres. The mapping is found by calculating the $m(t) \times n(t)$ distance matrix $D$ between the GT tracks $X(t)$ and the ATO tracks $Y(t)$ at time $t$, such that:

$$D_{ij} = \| x_i(t) - y_j(t) \| \qquad (2)$$

The matrix $D$ is then used to define the mapping for each GT element $M_I(t, p_i)$ as the closest ATO track $q_j$, within $\Delta d$ metres, that has not already been mapped to another GT track. The mapping is worked out in closest-first order. A target $i$ is defined to have been successfully tracked between two times $t_1$ and $t_2$ if $M_I(t_1, p_i) = M_I(t_2, p_i)$, i.e. the same ATO track represents that GT track at these two times. An evaluation score $f_i(t_1, t_2)$ is assigned to this outcome, i.e.

$$f_i(t_1, t_2) = \begin{cases} 1 & \text{if } M_I(t_1, p_i) = M_I(t_2, p_i) \\ 0 & \text{otherwise} \end{cases} \qquad (3)$$

and this is averaged over the available data, to define the performance evaluation of identity tracking, $P_I$:

$$P_I(\Delta d, \Delta t) = \frac{\sum_{t=t_0}^{T-\Delta t} \sum_{i=1}^{m(t)} f_i(t, t+\Delta t)}{\sum_{t=t_0}^{T-\Delta t} m(t)} \qquad (4)$$

6.3 Evaluation of Category Tracking

In the previous section, it was shown how the two fundamental measures of tracking accuracy (time and space) are combined into a single evaluation measure, $P_I$. For category tracking, it has not been possible to combine these measures similarly. Rather, it is shown below how they result in two separate evaluation criteria, $P_C$ and $Q_C$:
1. At any given instant, the spatial accuracy of the ATO can be characterised as the proportion of the GT tracks that can be uniquely associated with an ATO track of the same category, to within a distance of $\Delta d$ metres. This statistic can be evaluated over all tracks and the available time period to produce the first measure, $P_C$.

2. The temporal accuracy of the ATO tracks is concerned with the constancy of their category information over a period of time $\Delta t$. A measure, $Q_C$, is designed to indicate the quality of the ATO data in this respect.

The definition of the spatial measure of category tracking, $P_C$, proceeds in a similar way to the identity tracking measure, $P_I$. However, we replace the identity mapping function $M_I(t, p_i)$ with a category mapping function $M_C(t, c_i)$ that maps the category of the GT track $i$ against the category estimate $e_j$ of the nearest unassociated ATO track $j$ (mapping closest matches first). An evaluation score $h_i(t)$ is assigned to this outcome, i.e.

$$h_i(t) = \begin{cases} 1 & \text{if } M_C(t, c_i) = e_j \\ 0 & \text{otherwise} \end{cases} \qquad (5)$$

The spatial performance measure of category tracking is then defined as the proportion of tracks correctly tracked:

$$P_C(\Delta d) = \frac{\sum_{t=t_0}^{T} \sum_{i=1}^{m(t)} h_i(t)}{\sum_{t=t_0}^{T} m(t)} \qquad (6)$$

The definition of the temporal measure of accuracy, or categorisation discontinuity, does not require a GT data set at all. It can be defined by comparing the category estimates for ATO track $i$ at two different times, $e_i(t)$ and $e_i(t + \Delta t)$. A discontinuity has occurred if $e_i(t) \neq e_i(t + \Delta t)$, or if there is no track $i$ at either time. We count the number of categorisation discontinuities in the available time period.

7. Results and Analysis

The evaluation method has been demonstrated on three different data sets. These illustrate the effect of varying two parameters in the method described in Section 3. The performance evaluation measure for identity tracking is shown in Figure 2(a). Here, $P_I(\Delta d, \Delta t)$ is plotted as a function of $\Delta t$, keeping $\Delta d$ fixed at 5 metres.
Each value is calculated by averaging over all results in an 80-second window. In this case, it indicates that the result with $\gamma = 2.5$ is marginally more effective than a straightforward use of the histogram intersection output. Figure 2(b) shows the evaluation measure for category tracking, which is a plot of $P_C(\Delta d)$ as a function of $\Delta d$. Figure 2(c) shows the discontinuity measure as a windowed average over an 80-second sequence. An inspection of the tracking results validates the evaluation result, in that the visual impression of the result correlates with the numerical values obtained. Nonetheless, there are several factors affecting the accuracy of the evaluation result, including the accuracy of the ground truth data (especially in highly crowded scenes) and the consistency of the calibration transformations. Even so, the results vindicate the overall methodology for evaluating different tracking results for a given video data set. There remains the more general problem of comparing tracking results across different sequences. Here, the amount of occlusion, the image geometry and the compression are all significant factors.
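For concreteness, the closest-first identity mapping and the two measures used in these evaluations can be sketched as follows; the data layouts and helper names are our assumptions:

```python
import math

def identity_mapping(gt_points, ato_points, max_dist):
    """Closest-first greedy mapping from GT index to ATO index; a GT
    element maps to None (the null token) when no unclaimed ATO target
    lies within max_dist metres."""
    pairs = sorted(
        (math.dist(g, a), i, j)
        for i, g in enumerate(gt_points)
        for j, a in enumerate(ato_points)
    )
    mapping = {i: None for i in range(len(gt_points))}
    claimed = set()
    for d, i, j in pairs:
        if d > max_dist:
            break  # remaining pairs are farther still
        if mapping[i] is None and j not in claimed:
            mapping[i] = j
            claimed.add(j)
    return mapping

def p_identity(gt_frames, ato_frames, max_dist, dt):
    """P_I: the proportion of GT targets whose ATO association is
    unchanged between frames t and t + dt; lists are assumed indexed so
    that position i is the same GT identity in every frame, and a target
    unmapped at both times counts as consistent, per the literal score."""
    hits = total = 0
    for t in range(len(gt_frames) - dt):
        m1 = identity_mapping(gt_frames[t], ato_frames[t], max_dist)
        m2 = identity_mapping(gt_frames[t + dt], ato_frames[t + dt], max_dist)
        for i in m1:
            if i in m2:
                hits += m1[i] == m2[i]
                total += 1
    return hits / total if total else 0.0

def category_discontinuities(category_frames, dt):
    """Q_C ingredient: count ATO tracks whose category estimate differs
    between t and t + dt, or which are missing at either time; each
    frame is a dict mapping track label to category estimate."""
    count = 0
    for t in range(len(category_frames) - dt):
        now, later = category_frames[t], category_frames[t + dt]
        for track in set(now) | set(later):
            if track not in now or track not in later or now[track] != later[track]:
                count += 1
    return count
```

The greedy closest-first assignment mirrors the matrix-based procedure of Section 6.2; a globally optimal assignment (e.g. Hungarian) would be an alternative design choice.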

Figure 2: Evaluation measures for result A ($\gamma = 1.0$), B ($\gamma = 2.5$) and C ($\gamma = 5.0$): (a) spatio-temporal evaluation $P_I$ of identity tracking; (b) spatial evaluation $P_C$ of category tracking; (c) temporal evaluation $Q_C$ of category tracking.

8. Conclusion

We have presented an analysis of evaluation methods for football tracking. We provide three performance measures appropriate for this context, which can be used to assess the accuracy of tracking individuals and also categories of people whose members cannot be distinguished. Some data sets and a performance evaluation service are now available for the research community at. There is a strong motivation for generalising this work to other complex tracking problems.

References

[1] CAVIAR: Context Aware Vision using Image-based Active Recognition. Web resource, verified 29-04.
[2] ViPER: The Video Performance Evaluation Resource. Web resource, verified 29-04.
[3] J. Black, T. Ellis, and P. Rosin. A novel method for video tracking performance evaluation. In Joint IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance, October.
[4] L. M. Brown, A. W. Senior, Y. Tian, J. Connell, A. Hampapur, H. M. C. Shu, and M. Lu. Performance evaluation of surveillance systems under varying conditions. In IEEE International Workshop on Performance Evaluation of Tracking and Surveillance, January.
[5] P. Figueroa, N. Leite, R. M. L. Barros, I. Cohen, and G. G. Medioni. Tracking soccer players using the graph representation. In Proc. ICPR 2004.
[6] R. B. Fisher. PETS04 surveillance ground truth data set. In Proc. PETS04, May.
[7] S. Iwase and H. Saito. Tracking soccer player using multiple views. In IAPR Workshop on Machine Vision Applications (MVA02).
[8] S. Iwase and H. Saito. Parallel tracking of all soccer players by integrating detected positions in multiple view images. In Proc. ICPR 2004, August.
[9] C. Jaynes, S. Webb, R. M. Steele, and Q. Xiong. An open development environment for evaluation of video surveillance systems. In Proc. PETS02, June.
[10] S. Lefevre, C. Fluck, B. Maillard, and N. Vincent. A fast snake-based method to track football players. In Proc. MVA, November.
[11] S. Lefevre, J. Gerard, A. Piron, and N. Vincent. An extended snake model for real-time multiple object tracking. In Proc. ACIVS, September.
[12] C. J. Needham and R. D. Boyle. Tracking multiple sports players through occlusion, congestion and scale. In Proc. BMVC.
[13] C. J. Needham and R. D. Boyle. Performance evaluation metrics and statistics for positional tracker evaluation. In Computer Vision Systems, Third International Conference, ICVS 2003, April.
[14] Y. Ohno, J. Miura, and Y. Shirai. Tracking players and a ball in soccer games. In Int. Conf. on Multisensor Fusion and Integration for Intelligent Systems.
[15] S. Pingali and J. Segen. Performance evaluation of people tracking systems. In IEEE Workshop on Applications of Computer Vision, pages 33-38, November.
[16] R. Tsai. An efficient and accurate camera calibration technique for 3D machine vision. In Proc. CVPR.
[17] H. Wu and Q. Zheng. Self-evaluation for video tracking systems. In 24th Army Science Conference, November.
[18] M. Xu, J. Orwell, L. Lowey, and D. Thirde. Architecture and algorithms for tracking football players with multiple cameras. IEE Proceedings on Vision, Image and Signal Processing, 152(2), April.


A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA N. Zarrinpanjeh a, F. Dadrassjavan b, H. Fattahi c * a Islamic Azad University of Qazvin - nzarrin@qiau.ac.ir

More information

Practical Tour of Visual tracking. David Fleet and Allan Jepson January, 2006

Practical Tour of Visual tracking. David Fleet and Allan Jepson January, 2006 Practical Tour of Visual tracking David Fleet and Allan Jepson January, 2006 Designing a Visual Tracker: What is the state? pose and motion (position, velocity, acceleration, ) shape (size, deformation,

More information

Traffic Monitoring Systems. Technology and sensors

Traffic Monitoring Systems. Technology and sensors Traffic Monitoring Systems Technology and sensors Technology Inductive loops Cameras Lidar/Ladar and laser Radar GPS etc Inductive loops Inductive loops signals Inductive loop sensor The inductance signal

More information

A Tempo-Topographical Model Inference of a Camera Network for Video Surveillance

A Tempo-Topographical Model Inference of a Camera Network for Video Surveillance International Journal of Computer and Electrical Engineering, Vol. 5, No. 4, August 203 A Tempo-Topographical Model Inference of a Camera Network for Video Surveillance Khalid Al-Shalfan and M. Elarbi-Boudihir

More information

Support Vector Machine-Based Human Behavior Classification in Crowd through Projection and Star Skeletonization

Support Vector Machine-Based Human Behavior Classification in Crowd through Projection and Star Skeletonization Journal of Computer Science 6 (9): 1008-1013, 2010 ISSN 1549-3636 2010 Science Publications Support Vector Machine-Based Human Behavior Classification in Crowd through Projection and Star Skeletonization

More information

Real-Time Airport Security Checkpoint Surveillance Using a Camera Network

Real-Time Airport Security Checkpoint Surveillance Using a Camera Network Real-Time Airport Security Checkpoint Surveillance Using a Camera Network Richard J. Radke Department of Electrical, Computer and Systems Engineering Rensselaer Polytechnic Institute This material is based

More information

Neovision2 Performance Evaluation Protocol

Neovision2 Performance Evaluation Protocol Neovision2 Performance Evaluation Protocol Version 3.0 4/16/2012 Public Release Prepared by Rajmadhan Ekambaram rajmadhan@mail.usf.edu Dmitry Goldgof, Ph.D. goldgof@cse.usf.edu Rangachar Kasturi, Ph.D.

More information

Automatic Labeling of Lane Markings for Autonomous Vehicles

Automatic Labeling of Lane Markings for Autonomous Vehicles Automatic Labeling of Lane Markings for Autonomous Vehicles Jeffrey Kiske Stanford University 450 Serra Mall, Stanford, CA 94305 jkiske@stanford.edu 1. Introduction As autonomous vehicles become more popular,

More information

Online Play Segmentation for Broadcasted American Football TV Programs

Online Play Segmentation for Broadcasted American Football TV Programs Online Play Segmentation for Broadcasted American Football TV Programs Liexian Gu 1, Xiaoqing Ding 1, and Xian-Sheng Hua 2 1 Department of Electronic Engineering, Tsinghua University, Beijing, China {lxgu,

More information

Real Time Target Tracking with Pan Tilt Zoom Camera

Real Time Target Tracking with Pan Tilt Zoom Camera 2009 Digital Image Computing: Techniques and Applications Real Time Target Tracking with Pan Tilt Zoom Camera Pankaj Kumar, Anthony Dick School of Computer Science The University of Adelaide Adelaide,

More information

Visual Tracking of Athletes in Volleyball Sport Videos

Visual Tracking of Athletes in Volleyball Sport Videos Visual Tracking of Athletes in Volleyball Sport Videos H.Salehifar 1 and A.Bastanfard 2 1 Faculty of Electrical, Computer and IT, Islamic Azad University, Qazvin Branch, Qazvin, Iran 2 Computer Group,

More information

Colorado School of Mines Computer Vision Professor William Hoff

Colorado School of Mines Computer Vision Professor William Hoff Professor William Hoff Dept of Electrical Engineering &Computer Science http://inside.mines.edu/~whoff/ 1 Introduction to 2 What is? A process that produces from images of the external world a description

More information

Towards License Plate Recognition: Comparying Moving Objects Segmentation Approaches

Towards License Plate Recognition: Comparying Moving Objects Segmentation Approaches 1 Towards License Plate Recognition: Comparying Moving Objects Segmentation Approaches V. J. Oliveira-Neto, G. Cámara-Chávez, D. Menotti UFOP - Federal University of Ouro Preto Computing Department Ouro

More information

A Robust Multiple Object Tracking for Sport Applications 1) Thomas Mauthner, Horst Bischof

A Robust Multiple Object Tracking for Sport Applications 1) Thomas Mauthner, Horst Bischof A Robust Multiple Object Tracking for Sport Applications 1) Thomas Mauthner, Horst Bischof Institute for Computer Graphics and Vision Graz University of Technology, Austria {mauthner,bischof}@icg.tu-graz.ac.at

More information

On-line Tracking Groups of Pedestrians with Bayesian Networks

On-line Tracking Groups of Pedestrians with Bayesian Networks On-line Tracking Groups of Pedestrians with Bayesian Networks Pedro M. Jorge ISEL / ISR pmj@isel.ipl.pt Jorge S. Marques IST / ISR jsm@isr.ist.utl.pt Arnaldo J. Abrantes ISEL aja@isel.ipl.pt Abstract A

More information

Automatic Traffic Estimation Using Image Processing

Automatic Traffic Estimation Using Image Processing Automatic Traffic Estimation Using Image Processing Pejman Niksaz Science &Research Branch, Azad University of Yazd, Iran Pezhman_1366@yahoo.com Abstract As we know the population of city and number of

More information

Neural Network based Vehicle Classification for Intelligent Traffic Control

Neural Network based Vehicle Classification for Intelligent Traffic Control Neural Network based Vehicle Classification for Intelligent Traffic Control Saeid Fazli 1, Shahram Mohammadi 2, Morteza Rahmani 3 1,2,3 Electrical Engineering Department, Zanjan University, Zanjan, IRAN

More information

ROBUST VEHICLE TRACKING IN VIDEO IMAGES BEING TAKEN FROM A HELICOPTER

ROBUST VEHICLE TRACKING IN VIDEO IMAGES BEING TAKEN FROM A HELICOPTER ROBUST VEHICLE TRACKING IN VIDEO IMAGES BEING TAKEN FROM A HELICOPTER Fatemeh Karimi Nejadasl, Ben G.H. Gorte, and Serge P. Hoogendoorn Institute of Earth Observation and Space System, Delft University

More information

SIMPLIFIED PERFORMANCE MODEL FOR HYBRID WIND DIESEL SYSTEMS. J. F. MANWELL, J. G. McGOWAN and U. ABDULWAHID

SIMPLIFIED PERFORMANCE MODEL FOR HYBRID WIND DIESEL SYSTEMS. J. F. MANWELL, J. G. McGOWAN and U. ABDULWAHID SIMPLIFIED PERFORMANCE MODEL FOR HYBRID WIND DIESEL SYSTEMS J. F. MANWELL, J. G. McGOWAN and U. ABDULWAHID Renewable Energy Laboratory Department of Mechanical and Industrial Engineering University of

More information

Segmentation of building models from dense 3D point-clouds

Segmentation of building models from dense 3D point-clouds Segmentation of building models from dense 3D point-clouds Joachim Bauer, Konrad Karner, Konrad Schindler, Andreas Klaus, Christopher Zach VRVis Research Center for Virtual Reality and Visualization, Institute

More information

Professor, D.Sc. (Tech.) Eugene Kovshov MSTU «STANKIN», Moscow, Russia

Professor, D.Sc. (Tech.) Eugene Kovshov MSTU «STANKIN», Moscow, Russia Professor, D.Sc. (Tech.) Eugene Kovshov MSTU «STANKIN», Moscow, Russia As of today, the issue of Big Data processing is still of high importance. Data flow is increasingly growing. Processing methods

More information

Distributed Vision-Based Reasoning for Smart Home Care

Distributed Vision-Based Reasoning for Smart Home Care Distributed Vision-Based Reasoning for Smart Home Care Arezou Keshavarz Stanford, CA 9435 arezou@keshavarz.net Ali Maleki Tabar Stanford, CA 9435 maleki@stanford.edu Hamid Aghajan Stanford, CA 9435 aghajan@stanford.edu

More information

The Scientific Data Mining Process

The Scientific Data Mining Process Chapter 4 The Scientific Data Mining Process When I use a word, Humpty Dumpty said, in rather a scornful tone, it means just what I choose it to mean neither more nor less. Lewis Carroll [87, p. 214] In

More information

A Reliability Point and Kalman Filter-based Vehicle Tracking Technique

A Reliability Point and Kalman Filter-based Vehicle Tracking Technique A Reliability Point and Kalman Filter-based Vehicle Tracing Technique Soo Siang Teoh and Thomas Bräunl Abstract This paper introduces a technique for tracing the movement of vehicles in consecutive video

More information

Interactive person re-identification in TV series

Interactive person re-identification in TV series Interactive person re-identification in TV series Mika Fischer Hazım Kemal Ekenel Rainer Stiefelhagen CV:HCI lab, Karlsruhe Institute of Technology Adenauerring 2, 76131 Karlsruhe, Germany E-mail: {mika.fischer,ekenel,rainer.stiefelhagen}@kit.edu

More information

University of Leeds SCHOOL OF COMPUTER STUDIES RESEARCH REPORT SERIES Report 2001.21

University of Leeds SCHOOL OF COMPUTER STUDIES RESEARCH REPORT SERIES Report 2001.21 University of Leeds SCHOOL OF COMPUTER STUDIES RESEARCH REPORT SERIES Report 2001.21 Tracking Multiple Vehicles using Foreground, Background and Motion Models 1 by D R Magee December 2001 1 Submitted to

More information

Relational Learning for Football-Related Predictions

Relational Learning for Football-Related Predictions Relational Learning for Football-Related Predictions Jan Van Haaren and Guy Van den Broeck jan.vanhaaren@student.kuleuven.be, guy.vandenbroeck@cs.kuleuven.be Department of Computer Science Katholieke Universiteit

More information

Automatic parameter regulation for a tracking system with an auto-critical function

Automatic parameter regulation for a tracking system with an auto-critical function Automatic parameter regulation for a tracking system with an auto-critical function Daniela Hall INRIA Rhône-Alpes, St. Ismier, France Email: Daniela.Hall@inrialpes.fr Abstract In this article we propose

More information

Solving Simultaneous Equations and Matrices

Solving Simultaneous Equations and Matrices Solving Simultaneous Equations and Matrices The following represents a systematic investigation for the steps used to solve two simultaneous linear equations in two unknowns. The motivation for considering

More information

Real-Time Cooperative Multi-Target Tracking by Communicating Active Vision Agents

Real-Time Cooperative Multi-Target Tracking by Communicating Active Vision Agents Real-Time Cooperative Multi-Target Tracking by Communicating Active Vision Agents Norimichi Ukita Graduate School of Information Science Nara Institute of Science and Technology, Japan ukita@is.aist-nara.ac.jp

More information

Creating Synthetic Temporal Document Collections for Web Archive Benchmarking

Creating Synthetic Temporal Document Collections for Web Archive Benchmarking Creating Synthetic Temporal Document Collections for Web Archive Benchmarking Kjetil Nørvåg and Albert Overskeid Nybø Norwegian University of Science and Technology 7491 Trondheim, Norway Abstract. In

More information

REAL TIME TRAFFIC LIGHT CONTROL USING IMAGE PROCESSING

REAL TIME TRAFFIC LIGHT CONTROL USING IMAGE PROCESSING REAL TIME TRAFFIC LIGHT CONTROL USING IMAGE PROCESSING Ms.PALLAVI CHOUDEKAR Ajay Kumar Garg Engineering College, Department of electrical and electronics Ms.SAYANTI BANERJEE Ajay Kumar Garg Engineering

More information

High-dimensional labeled data analysis with Gabriel graphs

High-dimensional labeled data analysis with Gabriel graphs High-dimensional labeled data analysis with Gabriel graphs Michaël Aupetit CEA - DAM Département Analyse Surveillance Environnement BP 12-91680 - Bruyères-Le-Châtel, France Abstract. We propose the use

More information

Face Recognition in Low-resolution Images by Using Local Zernike Moments

Face Recognition in Low-resolution Images by Using Local Zernike Moments Proceedings of the International Conference on Machine Vision and Machine Learning Prague, Czech Republic, August14-15, 014 Paper No. 15 Face Recognition in Low-resolution Images by Using Local Zernie

More information

Tracking And Object Classification For Automated Surveillance

Tracking And Object Classification For Automated Surveillance Tracking And Object Classification For Automated Surveillance Omar Javed and Mubarak Shah Computer Vision ab, University of Central Florida, 4000 Central Florida Blvd, Orlando, Florida 32816, USA {ojaved,shah}@cs.ucf.edu

More information

Environmental Remote Sensing GEOG 2021

Environmental Remote Sensing GEOG 2021 Environmental Remote Sensing GEOG 2021 Lecture 4 Image classification 2 Purpose categorising data data abstraction / simplification data interpretation mapping for land cover mapping use land cover class

More information

3 Image-Based Photo Hulls. 2 Image-Based Visual Hulls. 3.1 Approach. 3.2 Photo-Consistency. Figure 1. View-dependent geometry.

3 Image-Based Photo Hulls. 2 Image-Based Visual Hulls. 3.1 Approach. 3.2 Photo-Consistency. Figure 1. View-dependent geometry. Image-Based Photo Hulls Greg Slabaugh, Ron Schafer Georgia Institute of Technology Center for Signal and Image Processing Atlanta, GA 30332 {slabaugh, rws}@ece.gatech.edu Mat Hans Hewlett-Packard Laboratories

More information

Reconstructing 3D Pose and Motion from a Single Camera View

Reconstructing 3D Pose and Motion from a Single Camera View Reconstructing 3D Pose and Motion from a Single Camera View R Bowden, T A Mitchell and M Sarhadi Brunel University, Uxbridge Middlesex UB8 3PH richard.bowden@brunel.ac.uk Abstract This paper presents a

More information

Single Image 3D Reconstruction of Ball Motion and Spin From Motion Blur

Single Image 3D Reconstruction of Ball Motion and Spin From Motion Blur Single Image 3D Reconstruction of Ball Motion and Spin From Motion Blur An Experiment in Motion from Blur Giacomo Boracchi, Vincenzo Caglioti, Alessandro Giusti Objective From a single image, reconstruct:

More information

Multisensor Data Fusion and Applications

Multisensor Data Fusion and Applications Multisensor Data Fusion and Applications Pramod K. Varshney Department of Electrical Engineering and Computer Science Syracuse University 121 Link Hall Syracuse, New York 13244 USA E-mail: varshney@syr.edu

More information

Behavior Analysis in Crowded Environments. XiaogangWang Department of Electronic Engineering The Chinese University of Hong Kong June 25, 2011

Behavior Analysis in Crowded Environments. XiaogangWang Department of Electronic Engineering The Chinese University of Hong Kong June 25, 2011 Behavior Analysis in Crowded Environments XiaogangWang Department of Electronic Engineering The Chinese University of Hong Kong June 25, 2011 Behavior Analysis in Sparse Scenes Zelnik-Manor & Irani CVPR

More information

HANDS-FREE PC CONTROL CONTROLLING OF MOUSE CURSOR USING EYE MOVEMENT

HANDS-FREE PC CONTROL CONTROLLING OF MOUSE CURSOR USING EYE MOVEMENT International Journal of Scientific and Research Publications, Volume 2, Issue 4, April 2012 1 HANDS-FREE PC CONTROL CONTROLLING OF MOUSE CURSOR USING EYE MOVEMENT Akhil Gupta, Akash Rathi, Dr. Y. Radhika

More information

Real-time Person Detection and Tracking in Panoramic Video

Real-time Person Detection and Tracking in Panoramic Video 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops Real-time Person Detection and Tracking in Panoramic Video Marcus Thaler, Werner Bailer JOANNEUM RESEARCH, DIGITAL Institute for

More information

A Trajectory-Based Ball Detection and Tracking System with Applications to Shot-type Identification in Volleyball Videos

A Trajectory-Based Ball Detection and Tracking System with Applications to Shot-type Identification in Volleyball Videos A Trajectory-Based Ball Detection and Tracking System with Applications to Shot-type Identification in Volleyball Videos Bodhisattwa Chakraborty Dept. of Electronics and Communication Engg. National Institute

More information

Multi-view Intelligent Vehicle Surveillance System

Multi-view Intelligent Vehicle Surveillance System Multi-view Intelligent Vehicle Surveillance System S. Denman, C. Fookes, J. Cook, C. Davoren, A. Mamic, G. Farquharson, D. Chen, B. Chen and S. Sridharan Image and Video Research Laboratory Queensland

More information

Human behavior analysis from videos using optical flow

Human behavior analysis from videos using optical flow L a b o r a t o i r e I n f o r m a t i q u e F o n d a m e n t a l e d e L i l l e Human behavior analysis from videos using optical flow Yassine Benabbas Directeur de thèse : Chabane Djeraba Multitel

More information

CS231M Project Report - Automated Real-Time Face Tracking and Blending

CS231M Project Report - Automated Real-Time Face Tracking and Blending CS231M Project Report - Automated Real-Time Face Tracking and Blending Steven Lee, slee2010@stanford.edu June 6, 2015 1 Introduction Summary statement: The goal of this project is to create an Android

More information

An Open Development Environment for Evaluation of Video Surveillance Systems *

An Open Development Environment for Evaluation of Video Surveillance Systems * An Open Development Environment for Evaluation of Video Surveillance Systems * Christopher Jaynes, Stephen Webb, R. Matt Steele, and Quanren Xiong Metaverse Lab, Dept. of Computer Science University of

More information

Tracking in flussi video 3D. Ing. Samuele Salti

Tracking in flussi video 3D. Ing. Samuele Salti Seminari XXIII ciclo Tracking in flussi video 3D Ing. Tutors: Prof. Tullio Salmon Cinotti Prof. Luigi Di Stefano The Tracking problem Detection Object model, Track initiation, Track termination, Tracking

More information

Edge tracking for motion segmentation and depth ordering

Edge tracking for motion segmentation and depth ordering Edge tracking for motion segmentation and depth ordering P. Smith, T. Drummond and R. Cipolla Department of Engineering University of Cambridge Cambridge CB2 1PZ,UK {pas1001 twd20 cipolla}@eng.cam.ac.uk

More information

A Learning Based Method for Super-Resolution of Low Resolution Images

A Learning Based Method for Super-Resolution of Low Resolution Images A Learning Based Method for Super-Resolution of Low Resolution Images Emre Ugur June 1, 2004 emre.ugur@ceng.metu.edu.tr Abstract The main objective of this project is the study of a learning based method

More information

Bachelor of Games and Virtual Worlds (Programming) Subject and Course Summaries

Bachelor of Games and Virtual Worlds (Programming) Subject and Course Summaries First Semester Development 1A On completion of this subject students will be able to apply basic programming and problem solving skills in a 3 rd generation object-oriented programming language (such as

More information

Monitoring Creatures Great and Small: Computer Vision Systems for Looking at Grizzly Bears, Fish, and Grasshoppers

Monitoring Creatures Great and Small: Computer Vision Systems for Looking at Grizzly Bears, Fish, and Grasshoppers Monitoring Creatures Great and Small: Computer Vision Systems for Looking at Grizzly Bears, Fish, and Grasshoppers Greg Mori, Maryam Moslemi, Andy Rova, Payam Sabzmeydani, Jens Wawerla Simon Fraser University

More information

A Computer Vision System for Monitoring Production of Fast Food

A Computer Vision System for Monitoring Production of Fast Food ACCV2002: The 5th Asian Conference on Computer Vision, 23 25 January 2002, Melbourne, Australia A Computer Vision System for Monitoring Production of Fast Food Richard Russo Mubarak Shah Niels Lobo Computer

More information

Introduction to Engineering System Dynamics

Introduction to Engineering System Dynamics CHAPTER 0 Introduction to Engineering System Dynamics 0.1 INTRODUCTION The objective of an engineering analysis of a dynamic system is prediction of its behaviour or performance. Real dynamic systems are

More information

PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 4: LINEAR MODELS FOR CLASSIFICATION

PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 4: LINEAR MODELS FOR CLASSIFICATION PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 4: LINEAR MODELS FOR CLASSIFICATION Introduction In the previous chapter, we explored a class of regression models having particularly simple analytical

More information

Object tracking in video scenes

Object tracking in video scenes A Seminar On Object tracking in video scenes Presented by Alok K. Watve M.Tech. IT 1st year Indian Institue of Technology, Kharagpur Under the guidance of Dr. Shamik Sural Assistant Professor School of

More information

Florida Math for College Readiness

Florida Math for College Readiness Core Florida Math for College Readiness Florida Math for College Readiness provides a fourth-year math curriculum focused on developing the mastery of skills identified as critical to postsecondary readiness

More information

System Architecture of the System. Input Real time Video. Background Subtraction. Moving Object Detection. Human tracking.

System Architecture of the System. Input Real time Video. Background Subtraction. Moving Object Detection. Human tracking. American International Journal of Research in Science, Technology, Engineering & Mathematics Available online at http://www.iasir.net ISSN (Print): 2328-3491, ISSN (Online): 2328-3580, ISSN (CD-ROM): 2328-3629

More information

Chapter 10: Linear Kinematics of Human Movement

Chapter 10: Linear Kinematics of Human Movement Chapter 10: Linear Kinematics of Human Movement Basic Biomechanics, 4 th edition Susan J. Hall Presentation Created by TK Koesterer, Ph.D., ATC Humboldt State University Objectives Discuss the interrelationship

More information

Object Recognition and Template Matching

Object Recognition and Template Matching Object Recognition and Template Matching Template Matching A template is a small image (sub-image) The goal is to find occurrences of this template in a larger image That is, you want to find matches of

More information

Social Media Mining. Data Mining Essentials

Social Media Mining. Data Mining Essentials Introduction Data production rate has been increased dramatically (Big Data) and we are able store much more data than before E.g., purchase data, social media data, mobile phone data Businesses and customers

More information

Limitations of Human Vision. What is computer vision? What is computer vision (cont d)?

Limitations of Human Vision. What is computer vision? What is computer vision (cont d)? What is computer vision? Limitations of Human Vision Slide 1 Computer vision (image understanding) is a discipline that studies how to reconstruct, interpret and understand a 3D scene from its 2D images

More information

Slope and Rate of Change

Slope and Rate of Change Chapter 1 Slope and Rate of Change Chapter Summary and Goal This chapter will start with a discussion of slopes and the tangent line. This will rapidly lead to heuristic developments of limits and the

More information

Biometric Authentication using Online Signatures

Biometric Authentication using Online Signatures Biometric Authentication using Online Signatures Alisher Kholmatov and Berrin Yanikoglu alisher@su.sabanciuniv.edu, berrin@sabanciuniv.edu http://fens.sabanciuniv.edu Sabanci University, Tuzla, Istanbul,

More information

Detecting and Tracking Moving Objects for Video Surveillance

Detecting and Tracking Moving Objects for Video Surveillance IEEE Proc. Computer Vision and Pattern Recognition Jun. 3-5, 1999. Fort Collins CO Detecting and Tracking Moving Objects for Video Surveillance Isaac Cohen Gérard Medioni University of Southern California

More information

EFFICIENT VEHICLE TRACKING AND CLASSIFICATION FOR AN AUTOMATED TRAFFIC SURVEILLANCE SYSTEM

EFFICIENT VEHICLE TRACKING AND CLASSIFICATION FOR AN AUTOMATED TRAFFIC SURVEILLANCE SYSTEM EFFICIENT VEHICLE TRACKING AND CLASSIFICATION FOR AN AUTOMATED TRAFFIC SURVEILLANCE SYSTEM Amol Ambardekar, Mircea Nicolescu, and George Bebis Department of Computer Science and Engineering University

More information

Novel Probabilistic Methods for Visual Surveillance Applications

Novel Probabilistic Methods for Visual Surveillance Applications University of Pannonia Information Science and Technology PhD School Thesis Booklet Novel Probabilistic Methods for Visual Surveillance Applications Ákos Utasi Department of Electrical Engineering and

More information

CORRELATED TO THE SOUTH CAROLINA COLLEGE AND CAREER-READY FOUNDATIONS IN ALGEBRA

CORRELATED TO THE SOUTH CAROLINA COLLEGE AND CAREER-READY FOUNDATIONS IN ALGEBRA We Can Early Learning Curriculum PreK Grades 8 12 INSIDE ALGEBRA, GRADES 8 12 CORRELATED TO THE SOUTH CAROLINA COLLEGE AND CAREER-READY FOUNDATIONS IN ALGEBRA April 2016 www.voyagersopris.com Mathematical

More information

Research Article Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics

Research Article Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics Hindawi Publishing Corporation EURASIP Journal on Image and Video Processing Volume 2008, Article ID 246309, 10 pages doi:10.1155/2008/246309 Research Article Evaluating Multiple Object Tracking Performance:

More information

Component Ordering in Independent Component Analysis Based on Data Power

Component Ordering in Independent Component Analysis Based on Data Power Component Ordering in Independent Component Analysis Based on Data Power Anne Hendrikse Raymond Veldhuis University of Twente University of Twente Fac. EEMCS, Signals and Systems Group Fac. EEMCS, Signals

More information

BACnet for Video Surveillance

BACnet for Video Surveillance The following article was published in ASHRAE Journal, October 2004. Copyright 2004 American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc. It is presented for educational purposes

More information

Chapter 2. Derivation of the Equations of Open Channel Flow. 2.1 General Considerations

Chapter 2. Derivation of the Equations of Open Channel Flow. 2.1 General Considerations Chapter 2. Derivation of the Equations of Open Channel Flow 2.1 General Considerations Of interest is water flowing in a channel with a free surface, which is usually referred to as open channel flow.

More information

Removing Moving Objects from Point Cloud Scenes

Removing Moving Objects from Point Cloud Scenes 1 Removing Moving Objects from Point Cloud Scenes Krystof Litomisky klitomis@cs.ucr.edu Abstract. Three-dimensional simultaneous localization and mapping is a topic of significant interest in the research

More information

Invited Applications Paper

Invited Applications Paper Invited Applications Paper - - Thore Graepel Joaquin Quiñonero Candela Thomas Borchert Ralf Herbrich Microsoft Research Ltd., 7 J J Thomson Avenue, Cambridge CB3 0FB, UK THOREG@MICROSOFT.COM JOAQUINC@MICROSOFT.COM

More information

Understanding Purposeful Human Motion

Understanding Purposeful Human Motion M.I.T Media Laboratory Perceptual Computing Section Technical Report No. 85 Appears in Fourth IEEE International Conference on Automatic Face and Gesture Recognition Understanding Purposeful Human Motion

More information

E27 SPRING 2013 ZUCKER PROJECT 2 PROJECT 2 AUGMENTED REALITY GAMING SYSTEM

E27 SPRING 2013 ZUCKER PROJECT 2 PROJECT 2 AUGMENTED REALITY GAMING SYSTEM PROJECT 2 AUGMENTED REALITY GAMING SYSTEM OVERVIEW For this project, you will implement the augmented reality gaming system that you began to design during Exam 1. The system consists of a computer, projector,

More information

Observing Human Behavior in Image Sequences: the Video Hermeneutics Challenge

Observing Human Behavior in Image Sequences: the Video Hermeneutics Challenge Observing Human Behavior in Image Sequences: the Video Hermeneutics Challenge Pau Baiget, Jordi Gonzàlez Computer Vision Center, Dept. de Ciències de la Computació, Edifici O, Campus UAB, 08193 Bellaterra,

More information

Continuous Fastest Path Planning in Road Networks by Mining Real-Time Traffic Event Information

Continuous Fastest Path Planning in Road Networks by Mining Real-Time Traffic Event Information Continuous Fastest Path Planning in Road Networks by Mining Real-Time Traffic Event Information Eric Hsueh-Chan Lu Chi-Wei Huang Vincent S. Tseng Institute of Computer Science and Information Engineering

More information

International Journal of Innovative Research in Computer and Communication Engineering. (A High Impact Factor, Monthly, Peer Reviewed Journal)

International Journal of Innovative Research in Computer and Communication Engineering. (A High Impact Factor, Monthly, Peer Reviewed Journal) Video Surveillance over Camera Network Using Hadoop Naveen Kumar 1, Elliyash Pathan 1, Lalan Yadav 1, Viraj Ransubhe 1, Sowjanya Kurma 2 1 Assistant Student (BE Computer), ACOE, Pune, India. 2 Professor,

More information

Online Learning for Fast Segmentation of Moving Objects

Online Learning for Fast Segmentation of Moving Objects Online Learning for Fast Segmentation of Moving Objects Liam Ellis, Vasileios Zografos {liam.ellis,vasileios.zografos}@liu.se CVL, Linköping University, Linköping, Sweden Abstract. This work addresses

More information

Traffic Flow Monitoring in Crowded Cities

Traffic Flow Monitoring in Crowded Cities Traffic Flow Monitoring in Crowded Cities John A. Quinn and Rose Nakibuule Faculty of Computing & I.T. Makerere University P.O. Box 7062, Kampala, Uganda {jquinn,rnakibuule}@cit.mak.ac.ug Abstract Traffic

More information

SmartMonitor An Intelligent Security System for the Protection of Individuals and Small Properties with the Possibility of Home Automation

SmartMonitor An Intelligent Security System for the Protection of Individuals and Small Properties with the Possibility of Home Automation Sensors 2014, 14, 9922-9948; doi:10.3390/s140609922 OPEN ACCESS sensors ISSN 1424-8220 www.mdpi.com/journal/sensors Article SmartMonitor An Intelligent Security System for the Protection of Individuals

More information

Automatic 3D Reconstruction via Object Detection and 3D Transformable Model Matching CS 269 Class Project Report

Automatic 3D Reconstruction via Object Detection and 3D Transformable Model Matching CS 269 Class Project Report Automatic 3D Reconstruction via Object Detection and 3D Transformable Model Matching CS 69 Class Project Report Junhua Mao and Lunbo Xu University of California, Los Angeles mjhustc@ucla.edu and lunbo

More information

A General Framework for Tracking Objects in a Multi-Camera Environment

A General Framework for Tracking Objects in a Multi-Camera Environment A General Framework for Tracking Objects in a Multi-Camera Environment Karlene Nguyen, Gavin Yeung, Soheil Ghiasi, Majid Sarrafzadeh {karlene, gavin, soheil, majid}@cs.ucla.edu Abstract We present a framework

More information