An Experimental Comparison of Online Object Tracking Algorithms
Qing Wang a, Feng Chen a, Wenli Xu a, and Ming-Hsuan Yang b
a Tsinghua University, Beijing, China
b University of California at Merced, California, USA
Corresponding author: Ming-Hsuan Yang ([email protected]).

ABSTRACT
This paper reviews and evaluates several state-of-the-art online object tracking algorithms. Notwithstanding decades of effort, object tracking remains a challenging problem due to factors such as illumination, pose, scale, deformation, motion blur, noise, and occlusion. To account for appearance change, most recent tracking algorithms focus on robust object representations and effective state prediction. In this paper, we analyze the components of each tracking method and identify their key roles in dealing with specific challenges, thereby shedding light on how to choose and design algorithms for different situations. We compare state-of-the-art online tracking methods1-10 on numerous challenging sequences, and evaluate them with different performance metrics. The qualitative and quantitative comparative results demonstrate the strengths and weaknesses of these algorithms.

Keywords: Object tracking, online algorithm, appearance model, performance evaluation.

1. INTRODUCTION
The goal of object tracking is to estimate the locations and motion parameters of a target in an image sequence given the initialized position in the first frame. Research in tracking plays a key role in understanding motion and structure of objects, and it finds numerous applications including surveillance,11 human-computer interaction,12 traffic pattern analysis,13 recognition,14 and medical image processing,15 to name a few. Although object tracking has been studied for several decades, and numerous tracking algorithms have been proposed for different tasks, it remains a very challenging problem. There exists no single tracking method that can be successfully applied to all tasks and situations. Therefore, it is crucial to review recent tracking methods and evaluate their performance to show how novel algorithms can be designed for handling specific tracking scenarios.

A typical tracking system consists of three components: object representation, dynamic model, and search mechanism. As such, tracking algorithms can be categorized in numerous ways. Object representation is a key component as it directly corresponds to the core challenge of tracking, i.e., how to match object appearance despite all the influencing factors. Moreover, it also determines what objective function can be used for searching the target of interest in frames. To deal with the problem of appearance variations, recent tracking algorithms focus on adaptive object representation schemes based on generative or discriminative formulations. A dynamic model, either predefined or learned from certain training data, is often used to predict the possible target states (e.g., motion parameters) in order to reduce the search space and computational load. Since it is difficult to adapt an effective dynamic model to fast motion, and as a result of faster processors, most current tracking algorithms use a random walk model to predict the likely states.

Object tracking algorithms can be categorized as either deterministic or stochastic based on their search mechanisms. With the target of interest represented in some feature space, object tracking can always be reduced to a search task and formulated as an optimization problem.
That is, the tracking results are often obtained by minimizing or maximizing an objective function based on distance, similarity, or classification measures. To optimize the objective function, deterministic methods are formulated and solved with differential algorithms such as gradient descent or its variants. The Kanade-Lucas-Tomasi algorithm16 and the mean-shift tracking algorithm17 are deterministic methods, in which the sum of squared distance (SSD) and the Bhattacharyya distance are used in the objective functions, respectively. The Kalman filter18 is also a deterministic method based on linear dynamic models. Gradient descent based deterministic methods are usually efficient, but often suffer from local minimum problems. Different from differential optimization, sampling-based methods
can be used to avoid local minimum problems at the expense of higher computational load. Stochastic methods usually optimize the objective function by considering observations over multiple frames within a Bayesian formulation. They improve robustness over deterministic methods through their ability to escape from local minima, with much lower computational complexity than sampling-based methods that operate on each frame independently. The condensation algorithm19 is an effective stochastic method that deals with nonlinear objective functions and non-Gaussian dynamic models.

Generative methods track a target object by searching for the region most similar to the reference model in each frame. To deal with the above-mentioned challenges in object tracking, most recent generative methods learn robust static or online appearance models. Black et al.20 learn a subspace model offline to represent target objects at fixed views. Jepson et al.21 use a Gaussian mixture model with an online expectation maximization (EM) algorithm to handle target appearance variations during tracking, whereas Ross et al.1 present an online subspace algorithm to model target appearance. Adam et al.3 develop a fragments-based appearance model to overcome pose change and partial occlusion problems, and Mei et al.7 present a template-based method using sparse representation, as it has been shown to be robust to partial occlusion and image noise. Recently, Kwon et al.9 extend the conventional particle filter framework with multiple dynamic and observation models to account for appearance variation. While these generative methods perform well, they nevertheless do not take rich scene information into account, which can be useful in separating target objects from background clutter.

Discriminative methods pose object tracking as a binary classification problem in which the task is to distinguish the target region from the background. Avidan22 trains a classifier offline with the support vector machine (SVM) and combines it with optical flow for object tracking. Collins et al.2 propose a tracking method that selects discriminative low-level color features online, whereas Avidan23 uses an online boosting method to classify pixels belonging to the foreground and background. Grabner et al.4 develop a tracking method based on online boosting, which selects features to account for appearance variations of the object caused by out-of-plane rotations and illumination change. Babenko et al.8 use multiple instance learning (MIL) to handle ambiguously labeled positive and negative data obtained online to reduce the visual drift caused by classifier updates. Recently, Kalal et al.10 treat data sampled during tracking as unlabeled and exploit their underlying structure to select positive and negative samples for update.

Several criteria such as success rate and center location error have been used in the tracking literature for performance evaluation. However, these methods are often evaluated with only a few sequences, and it is not clear which algorithm should be used for specific applications. To this end, this paper focuses on evaluating the most recent tracking algorithms in dealing with different challenges. First, we demonstrate why adaptive models are crucial for dealing with the inevitable appearance change of the target and background over time. Second, we analyze tracking algorithms using the three above-mentioned components and identify their key roles with respect to different challenging factors.
Finally, we compare state-of-the-art tracking algorithms on several challenging sequences with different evaluation criteria. The evaluation and analysis are not only useful for choosing appropriate methods for specific applications but also beneficial for developing new tracking algorithms.

2. ADAPTIVE APPEARANCE MODELS
One of the most challenging factors in object tracking is accounting for appearance variation of the target object caused by changes of illumination, deformation, and pose. In addition, occlusion, motion blur, and camera view angle also pose significant difficulties for algorithms to track target objects. If a tracking method is designed to account for translational motion, then it is unlikely to handle in-plane and out-of-plane rotations or scale change of objects. For certain applications with limited illumination change, static appearance models based on SIFT,24 HOG25 and LBP26 descriptors may suffice. For applications where objects undergo limited deformation, holistic representations, e.g., histograms, may work well although they do not encode the spatial structure of objects. If the object shape does not change much, then representation schemes based on contour or silhouette19 can be used to account for out-of-plane rotation. When a tracking algorithm is designed to account for in-plane motion and scale change with the similarity transform, a static appearance model may be an appropriate option. However, situations arise where different kinds of variations need to be considered simultaneously. Since it is very difficult to develop a static representation invariant to all appearance change, adaptive models are crucial for robust tracking performance.

While a dynamic model is often used mainly to reduce the search space of states, it inevitably affects the tracking results, especially when the objects undergo fast motion. In some cases, a dynamic model facilitates reinitialization of a tracker after partial or full occlusions. For example, if the target state can be predicted well even when temporary occlusion occurs, it will be easy to relocate the target when it reappears in the scene. Aside from the predicted states, an object detector
or sampling of state variables can also be utilized to handle occlusion. Table 1 summarizes challenges often encountered in object tracking and the corresponding solutions. It is evident that object representation plays a key role in dealing with appearance change of the target object and background. Furthermore, an adaptive appearance model that accounts for all appearance variations online is of great importance for robust object tracking. However, online update methods may inadvertently affect the tracking performance. For example, if an appearance model is updated with noisy observations, tracking errors will be accumulated and result in visual drift.

Table 1: Challenges and solutions.
- Illumination change: Use descriptors which are not sensitive to illumination change; adapt the appearance model to account for illumination change.
- Object deformation: Use an object representation which is not sensitive to deformation; adapt the object representation to account for deformation.
- In-plane rotation: Use a state model that accounts for the similarity transformation; adapt the object representation to such appearance change.
- Out-of-plane rotation: Choose object representations which are insensitive to out-of-plane pose change; adapt the object representation to such appearance change.
- Partial occlusion: Use parts-based models which are not sensitive to partial occlusion; employ sampling of the state space so that the tracker may be reinitialized when the target reappears.
- Full occlusion: Search the state space exhaustively so that the tracker can be reinitialized when the target reappears.
- Fast object motion or moving background: Use a sophisticated dynamic model; search a large region of the state space.

3. ONLINE TRACKING ALGORITHMS
A typical tracking system is composed of three components: object representation, dynamic model and search mechanism. Since different components can deal with different challenges of object tracking, we analyze recent online tracking algorithms accordingly and show how to choose or design robust online algorithms for specific situations.

3.1 Object Representation
An object can be represented by either holistic descriptors or local descriptors. Color histograms and raw pixel values are common holistic descriptors. Color histograms have been used in the mean-shift tracking algorithm17 and the particle-based method.27 The advantages of histogram-based representations are their computational efficiency and effectiveness in handling shape deformation as well as partial occlusion. However, they do not exploit the structural appearance information of target objects. In addition, histogram-based representations are not designed to handle scale change, although some efforts have been made to address this problem.28,29 Holistic appearance models based on raw intensity values are used in the Kanade-Lucas-Tomasi algorithm,30 the incremental subspace learning tracking method,1 the incremental tensor subspace learning method31 and the l1-minimization based tracker.7 However, tracking methods based on holistic representations are sensitive to partial occlusion and motion blur. Filter responses have also been used to represent objects. Haar-like wavelets are used to describe objects for boosting-based tracking methods.4,8 Porikli et al.32 use features based on color and image gradients to characterize object appearance with update for visual tracking. Local descriptors have also been widely used in object tracking recently due to their robustness to pose and illumination change.
Local histograms and color information are utilized for generating confidence maps from which likely target locations can be determined.23 Features based on local histograms are selected to represent objects in the fragments-based method.3 It has been shown that an effective representation scheme is the key to dealing with appearance change in object tracking.
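As a concrete illustration of the histogram-based representations discussed above, the sketch below compares a reference template with a candidate region using the Bhattacharyya coefficient. It is a minimal example assuming NumPy; the patch sizes, bin count, and random image data are purely illustrative and not taken from any of the reviewed trackers.

```python
import numpy as np

def color_histogram(patch, bins=16):
    """Quantize an RGB patch (H x W x 3, uint8) into a normalized joint color histogram."""
    # Map each channel to one of `bins` levels and build a joint bin index per pixel.
    quantized = (patch.astype(np.int32) * bins) // 256
    joint = (quantized[..., 0] * bins + quantized[..., 1]) * bins + quantized[..., 2]
    hist = np.bincount(joint.ravel(), minlength=bins ** 3).astype(np.float64)
    return hist / (hist.sum() + 1e-12)

def bhattacharyya_coefficient(p, q):
    """Similarity between two normalized histograms; 1 means identical distributions."""
    return float(np.sum(np.sqrt(p * q)))

# Toy usage: compare a reference template with a candidate region (random data here).
rng = np.random.default_rng(0)
template = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
candidate = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
similarity = bhattacharyya_coefficient(color_histogram(template), color_histogram(candidate))
print(f"Bhattacharyya coefficient: {similarity:.3f}")
```

Note that, as the text points out, such a histogram comparison is insensitive to where colors occur in the patch, which is exactly why it tolerates deformation but discards spatial structure.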
3.2 Adaptive Appearance Model
As mentioned above, it is crucial to update the appearance model to ensure robust tracking performance, and much attention has been paid to this issue in recent years. The most straightforward method is to replace the current appearance model (e.g., template) with the visual information from the most recent tracking result. Other update algorithms have also been proposed, such as incremental subspace learning methods,1,31 the adaptive mixture model,21 and online boosting-based trackers.4,23 However, simple updates with recently obtained tracking results can easily lead to significant drift since it is difficult to determine whether the new data are noisy or not. Consequently, drifting errors are likely to accumulate gradually and tracking algorithms eventually fail to locate the targets. To reduce visual drift, several algorithms have been developed in recent years to facilitate adaptive appearance models. Matthews et al.33 propose a tracking method with the Lucas-Kanade algorithm that updates the template with the results from the most recent frames and a fixed reference template extracted from the first frame. In contrast to supervised discriminative object tracking, Grabner et al.5 formulate the update problem as a semi-supervised task where the drawn samples are treated as unlabeled data. The task is then to update a classifier with both labeled and unlabeled data. Specific priors can also be used in this semi-supervised approach6 to reduce drift. Babenko et al.8 pose the tracking problem within the multiple instance learning (MIL) framework to handle ambiguously labeled positive and negative data obtained online for reducing visual drift. Recently, Kalal et al.10 also pose the tracking problem as a semi-supervised learning task and exploit the underlying structure of the unlabeled data to select positive and negative samples for update. While much progress has been made on this topic, it is still a difficult task to determine when and which tracking results should be used to update adaptive appearance models to reduce drift.

3.3 Motion Model
The dimensionality of the state vector x_t at time t depends on the motion model that a tracking method is equipped with. The most commonly adopted models are translational motion (2 parameters), the similarity transform (4 parameters), and the affine transform (6 parameters). The classic Kanade-Lucas-Tomasi algorithm16 is designed to estimate object locations, although it can be extended to account for affine motion.33 The tracking methods1,7,31 account for affine transformations of objects between two consecutive frames. If an algorithm is designed to handle translational movements, the tracking results will not be accurate when the objects undergo rotational motion or scale change, even if an adaptive appearance model is utilized. We note that certain algorithms are constrained by their design and it may not be easy to use a different motion model to account for complex object movements. For example, the mean-shift based tracking algorithm17 is not equipped to deal with scale change or in-plane rotation since the objective function is not differentiable with respect to these motion parameters. However, if the objective function of a tracking algorithm is not differentiable with respect to the motion parameters, it may be feasible to use either sampling or stochastic search to solve the optimization problem.
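To make the parameter counts above concrete, the following sketch builds a 2x3 warp matrix from a 2-, 4-, or 6-dimensional state vector and applies it to a template point. It is a hypothetical illustration assuming NumPy; the (scale, rotation) parameterization of the similarity transform is one common convention, not the specific one used by the reviewed trackers.

```python
import numpy as np

def warp_matrix(state):
    """Build a 2x3 warp from a motion state vector:
    2 params -> translation (tx, ty), 4 -> similarity (tx, ty, scale, rotation),
    6 -> full affine (tx, ty, a11, a12, a21, a22)."""
    if len(state) == 2:
        tx, ty = state
        A = np.eye(2)
    elif len(state) == 4:
        tx, ty, s, theta = state
        A = s * np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])
    elif len(state) == 6:
        tx, ty, a11, a12, a21, a22 = state
        A = np.array([[a11, a12], [a21, a22]])
    else:
        raise ValueError("state must have 2, 4, or 6 parameters")
    return np.hstack([A, np.array([[tx], [ty]])])

# A point on the object template, mapped into the current frame under two models.
point = np.array([10.0, 5.0, 1.0])                  # homogeneous template coordinate
print(warp_matrix([3.0, -2.0]) @ point)             # translation only
print(warp_matrix([3.0, -2.0, 1.1, 0.1]) @ point)   # similarity transform
```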
3.4 Dynamic Model
A dynamic model is usually utilized to reduce computational complexity in object tracking as it describes the likely state transition, i.e., p(x_t | x_{t-1}), between two consecutive frames, where x_t is the state vector at time t. Constant velocity and constant acceleration models were used in early tracking methods such as Kalman filter-based trackers. In these methods, the state transition is modeled by a Gaussian distribution, p(x_t | x_{t-1}) = N(Φ_{t-1} x_{t-1}, ξ_{t-1}), where Φ_{t-1} and ξ_{t-1} are the transfer matrix and noise at time t-1, respectively. Since the assumption of constant velocity or acceleration is rather constrained, most recent tracking algorithms adopt random walk models1,7 with particle filters.

3.5 Search Mechanism
Since object tracking can be formulated as an optimization problem, the state search strategy depends mainly on the form of the objective function. In the literature, either deterministic or stochastic methods have been utilized for state search. If the objective function is differentiable with respect to the motion parameters, then gradient descent methods can be used.16,17,33 Otherwise, either sampling4,8 or stochastic methods1,7 can be used. Deterministic methods based on gradient descent are usually computationally efficient, but suffer from local minimum problems. Exhaustive search methods are able to achieve good tracking performance at the expense of very high computational load, and thus are seldom used in tracking tasks. Sampling-based search methods can achieve good tracking performance when the state variables do not change drastically. Consequently, stochastic search algorithms such as particle filters are a trade-off between these two extremes, with the ability to escape from local minima without high computational load. Particle filters have been widely used in recent online object tracking1,7-9 with demonstrated success.
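To tie Sections 3.4 and 3.5 together, here is a minimal sketch of one predict/weight/resample cycle of a particle filter with a Gaussian random-walk dynamic model, assuming NumPy. The 6-dimensional affine-style state, the particle count, and the toy likelihood function are illustrative placeholders for whatever appearance model a particular tracker actually uses.

```python
import numpy as np

rng = np.random.default_rng(1)

def propagate(particles, noise_std):
    """Random-walk dynamic model: x_t = x_{t-1} + Gaussian noise."""
    return particles + rng.normal(0.0, noise_std, size=particles.shape)

def particle_filter_step(particles, weights, likelihood_fn, noise_std):
    """One predict / weight / resample cycle over the state vectors."""
    particles = propagate(particles, noise_std)
    weights = weights * np.array([likelihood_fn(p) for p in particles])
    weights = weights / (weights.sum() + 1e-12)
    # Resample to concentrate particles on likely states.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Toy usage: particles over a 6-parameter affine state, with a dummy likelihood
# that prefers states close to a (hypothetical) true target state.
n, dim = 300, 6
particles = np.zeros((n, dim))
weights = np.full(n, 1.0 / n)
target = np.array([4.0, -1.0, 0.0, 0.0, 0.0, 0.0])
likelihood = lambda x: np.exp(-np.sum((x - target) ** 2))
for _ in range(20):
    particles, weights = particle_filter_step(
        particles, weights, likelihood,
        noise_std=np.array([1.0, 1.0, 0.02, 0.02, 0.02, 0.02]))
print("estimated state:", particles.mean(axis=0))
```

In a real tracker, the likelihood would be the appearance score of the image region warped by each particle's state (e.g., reconstruction error of a subspace model or a classifier response).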
4. EXPERIMENTAL COMPARISON
In this section, we empirically compare tracking methods based on the above discussions and demonstrate how to choose and design effective algorithms. We evaluate 10 state-of-the-art tracking algorithms on 15 challenging sequences using different criteria. The test algorithms include: the incremental visual tracker,1 variance ratio tracker,2 fragments-based tracker,3 online boosting tracker,4 semi-supervised tracker,5 extended semi-supervised tracker,6 l1 tracker,7 multiple instance learning tracker (MIL),8 visual tracking decomposition algorithm,9 and tracking-learning-detection method.10 Based on the above analysis, we categorize these algorithms in Table 2, which describes their object representation, motion model, dynamic model, search mechanism, and characteristics. The challenging factors of the test sequences are listed in Table 3. For fair evaluation, we use the source code provided by the authors in all experiments. For the tracking methods which use particle filtering, we use the same number of particles in all tests. The other parameters of each tracking method are carefully selected for best performance. It is worth noting that the fragments-based tracker3 is not an online method, although the experimental comparison shows the necessity of adaptive appearance models.

Table 2: Tracking algorithms (motion model; object representation; dynamic model; search mechanism; characteristics). The entries denoted with - indicate no dynamic model is employed.
- Incremental visual tracker:1 affine transform; holistic gray-scale image vector; Gaussian; particle filter; generative.
- Fragments-based tracker:3 similarity transform; local gray-scale histograms; -; sampling; generative.
- Variance ratio tracker:2 translational motion; holistic color histograms; -; mean-shift; discriminative.
- Online boosting tracker:4 translational motion; holistic representation based on Haar-like, HOG, and LBP descriptors; -; sampling; discriminative.
- Semi-supervised tracker:5 translational motion; holistic representation based on Haar-like descriptors; -; sampling; discriminative.
- Extended semi-supervised tracker:6 translational motion; holistic representation based on Haar-like, HOG, and color histograms; -; sampling; discriminative.
- l1 tracker:7 affine transform; holistic gray-level image vector; Gaussian; particle filter; generative.
- Multiple instance learning tracker:8 translational motion; holistic representation based on Haar-like descriptors; -; sampling; discriminative.
- Visual tracking decomposition:9 similarity transform; holistic representation based on hue, saturation, intensity, and edge template; Gaussian; particle filter; generative.
- Tracking-learning-detection:10 similarity transform; holistic representation based on Haar-like descriptors; -; sampling; discriminative.

Some of the tracking results are shown in Figure 1; high resolution images as well as videos can be found on our web site. We use two criteria, tracking success rate and location error with respect to the object center, for quantitative evaluation. To compute the success rate, we employ the criterion used in the PASCAL VOC challenge34 to evaluate whether each tracking result is a success or not. Given the tracked bounding box ROI_T and the ground truth bounding box ROI_G, the score is defined as

score = area(ROI_T ∩ ROI_G) / area(ROI_T ∪ ROI_G). (1)

The tracking result in one frame is considered a success when this score is above 0.5, and the success rate is computed over all the frames. The center location error is defined as the distance between the central locations of the tracked target and the manually labeled ground truth. The success rates and average center location errors of all these trackers are listed in Table 4 and Table 5, respectively. Figure 2 shows the details of the tracking errors.
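The two criteria are straightforward to compute; the sketch below implements the overlap score of Eq. (1), the resulting success rate, and the center location error, assuming bounding boxes given as (x, y, width, height) tuples. The function names and example boxes are illustrative.

```python
def overlap_score(box_t, box_g):
    """PASCAL-style overlap of Eq. (1): area(ROI_T ∩ ROI_G) / area(ROI_T ∪ ROI_G).
    Boxes are (x, y, w, h) in pixels."""
    xt, yt, wt, ht = box_t
    xg, yg, wg, hg = box_g
    ix = max(0.0, min(xt + wt, xg + wg) - max(xt, xg))
    iy = max(0.0, min(yt + ht, yg + hg) - max(yt, yg))
    inter = ix * iy
    union = wt * ht + wg * hg - inter
    return inter / union if union > 0 else 0.0

def success_rate(tracked, ground_truth, threshold=0.5):
    """Fraction of frames whose overlap score exceeds the threshold."""
    hits = sum(overlap_score(t, g) > threshold for t, g in zip(tracked, ground_truth))
    return hits / len(ground_truth)

def center_error(box_t, box_g):
    """Distance between the centers of the tracked and ground-truth boxes."""
    cxt, cyt = box_t[0] + box_t[2] / 2, box_t[1] + box_t[3] / 2
    cxg, cyg = box_g[0] + box_g[2] / 2, box_g[1] + box_g[3] / 2
    return ((cxt - cxg) ** 2 + (cyt - cyg) ** 2) ** 0.5

# Toy usage on a single frame.
print(overlap_score((10, 10, 40, 40), (20, 15, 40, 40)))  # ~0.49
print(center_error((10, 10, 40, 40), (20, 15, 40, 40)))   # ~11.2 pixels
```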
Table 3: The tracking sequences used in our experiments (sequence: main challenging factors).
- Sylvester: in-plane/out-of-plane pose change, fast motion, illumination change.
- Wall-E: scale change, out-of-plane pose change.
- David-indoor: illumination variation, out-of-plane pose change, partial occlusion.
- surfing: fast motion, large scale change, small object, moving camera.
- singer: scale change, significant illumination change.
- shaking: in-plane pose change, significant illumination change.
- Gymnastics: deformation, out-of-plane pose change.
- jumping: image blur, fast motion.
- car: image blur, partial occlusion.
- faceocc: in-plane pose change, partial occlusion.
- PETS2009: heavy occlusion, out-of-plane pose change, distraction from similar objects.
- CAVIAR: heavy occlusion, distraction from similar objects.
- board: background clutter, out-of-plane pose change.
- Avatar: occlusion, out-of-plane pose change, illumination change.
- David-outdoor: low-contrast images, occlusion, out-of-plane pose change.

Our experimental results show that the fragments-based method3 performs well only in the Sylvester and Gymnastics sequences, as it is able to deal with appearance variation due to pose change. While this method is designed to handle partial occlusion, it is not equipped to deal with objects with in-plane rotation. The tracking results in the faceocc sequence show that it does not perform well when the object undergoes both in-plane rotation and partial occlusion simultaneously.

The incremental visual tracker1 is designed to account for affine motion and appearance change (e.g., Gymnastics, jumping, and faceocc sequences). Since a holistic representation is used, it does not deal with partial occlusion well. On the other hand, the use of a Gaussian random walk model with a particle filter makes it robust to full occlusion to some degree, since it is able to search around when the target object reappears in the scene after occlusion. However, as this method uses all the tracking results for appearance update (though with a forgetting factor), it is prone to the effects of noisy observations and tracking errors are likely to accumulate. Therefore, it does not work well in long videos (e.g., Sylvester and surfing) and in sequences where the objects undergo large out-of-plane pose change (e.g., Wall-E and David-outdoor). As a generative method, it is also less effective in dealing with background clutter and low-contrast images (e.g., board and Avatar sequences).

The variance ratio tracker2 selects discriminative color features in each frame, and works better than the incremental visual tracker on the Sylvester and board sequences. However, since it does not deal with scale or pose change, it does not work well in cluttered backgrounds. As it uses holistic color histograms for representation, it is not effective in handling illumination change (e.g., David-indoor, singer, and shaking), low contrast (e.g., Avatar), and background clutter (e.g., PETS2009 and Avatar).

The online boosting tracker4 uses Haar-like features, HOG, and color histograms for representation, which are not so sensitive to image blur. As such, this tracker works well in the jumping, car, and Sylvester sequences. However, this method is sensitive to parameter settings and prone to drift when there are similar objects in the scene (e.g., PETS2009 and CAVIAR), or when the object undergoes large pose change in a cluttered background (e.g., board). As this method does not deal with scale change, it does not work well in the Wall-E and singer sequences.
The semi-supervised tracker5 exhibits less drifting error than the online boosting tracker. In the surfing sequence, it is able to track the target object throughout the entire image sequence. However, when objects undergo fast appearance change (e.g., Sylvester and David-outdoor), this method does not work as well as the online boosting tracker. The extended semi-supervised tracker6 can be regarded as a combination of the above two methods. It works better than the above two boosting-based trackers in the Gymnastics sequence where the objects undergo pose change and deformation, in the PETS2009 sequence where the targets are occluded along with similar objects, and in the Avatar sequence where the object is occluded as well as observed from different camera view angles.

The l1 tracker7 works well in the singer sequence, where the target appearance can be reconstructed by a small number of templates using l1-minimization even when there is drastic illumination change. It also works well in the Wall-E and
CAVIAR sequences, but is prone to drift in other videos (e.g., surfing, Gymnastics, and car). These experimental results can be explained by the use of rectangular templates for sparse representation, as it is not equipped to deal with pose change, deformation, or full occlusion (e.g., PETS2009 and board).

The multiple instance learning tracker8 utilizes multiple instance learning to reduce visual drift when updating appearance models. However, as the method is not designed to handle large scale change, it does not perform well in sequences where the targets undergo large scale changes (e.g., Wall-E, singer, and surfing).

The visual tracking decomposition algorithm9 uses multiple dynamic and observation models to account for appearance change. It works well in the singer and shaking sequences where there is significant illumination change, and in the car sequence where image blur and distractors appear in the scene. The tracking-learning-detection method10 performs better than the other methods in the Sylvester and Wall-E sequences where the target objects undergo pose change.

In the CAVIAR, board and David-outdoor sequences, where distractors similar to the target objects appear in the cluttered background, none of these trackers work well. It is of great interest to design more discriminative appearance models to separate the target object from the cluttered background, together with update methods that do not accumulate tracking errors.

Table 4: Success rates (%).

Table 5: Average center location errors (in pixels). The entries denoted with - indicate the corresponding method fails from the beginning.
Figure 1: Tracking results on challenging sequences (Wall-E, David-indoor, singer, surfing, shaking, Gymnastics, jumping, car, faceocc, PETS2009, CAVIAR, board, Avatar, David-outdoor).
Figure 2: Error plots of all the test sequences.
5. CONCLUSION
In this paper, we review tracking methods in terms of their components and identify their roles in handling challenging factors of object tracking. We evaluate state-of-the-art online tracking algorithms with detailed analysis of their performance. The experimental comparisons demonstrate the strengths as well as the weaknesses of these tracking algorithms, and shed light on future research directions.

REFERENCES
[1] D. Ross, J. Lim, R.-S. Lin, and M.-H. Yang, Incremental learning for robust visual tracking, International Journal of Computer Vision 77(1-3), 2008.
[2] R. T. Collins, Y. Liu, and M. Leordeanu, Online selection of discriminative tracking features, IEEE Transactions on Pattern Analysis and Machine Intelligence 27(10), 2005.
[3] A. Adam, E. Rivlin, and I. Shimshoni, Robust fragments-based tracking using the integral histogram, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2006.
[4] H. Grabner and H. Bischof, On-line boosting and vision, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2006.
[5] H. Grabner, C. Leistner, and H. Bischof, Semi-supervised on-line boosting for robust tracking, in Proceedings of European Conference on Computer Vision, 2008.
[6] S. Stalder, H. Grabner, and L. Van Gool, Beyond semi-supervised tracking: Tracking should be as simple as detection, but not simpler than recognition, in Proceedings of IEEE Workshop on Online Learning for Computer Vision, 2009.
[7] X. Mei and H. Ling, Robust visual tracking using l1 minimization, in Proceedings of the IEEE International Conference on Computer Vision, 2009.
[8] B. Babenko, M.-H. Yang, and S. Belongie, Visual tracking with online multiple instance learning, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2009.
[9] J. Kwon and K. Lee, Visual tracking decomposition, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2010.
[10] Z. Kalal, J. Matas, and K. Mikolajczyk, P-N learning: Bootstrapping binary classifiers by structural constraints, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2010.
[11] I. Haritaoglu, D. Harwood, and L. Davis, W4S: A real-time system for detecting and tracking people, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 1998.
[12] M. de La Gorce, N. Paragios, and D. Fleet, Model-based hand tracking with texture, shading and self-occlusions, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2008.
[13] Z. Sun, G. Bebis, and R. Miller, On-road vehicle detection: A review, IEEE Transactions on Pattern Analysis and Machine Intelligence 28(5), 2006.
[14] M. Kim, S. Kumar, V. Pavlovic, and H. Rowley, Face tracking and recognition with visual constraints in real-world videos, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2008.
[15] X. Zhou, D. Comaniciu, and A. Gupta, An information fusion framework for robust shape tracking, IEEE Transactions on Pattern Analysis and Machine Intelligence 27(1), 2005.
[16] B. Lucas and T. Kanade, An iterative image registration technique with an application to stereo vision, in Proceedings of International Joint Conference on Artificial Intelligence, 1981.
[17] D. Comaniciu, V. Ramesh, and P. Meer, Kernel-based object tracking, IEEE Transactions on Pattern Analysis and Machine Intelligence 25(5), 2003.
[18] A. Azarbayejani and A. Pentland, Recursive estimation of motion, structure, and focal length, IEEE Transactions on Pattern Analysis and Machine Intelligence 17(6), 1995.
[19] M. Isard and A. Blake, Condensation - conditional density propagation for visual tracking, International Journal of Computer Vision 29(1), 1998.
[20] M. Black and A. Jepson, Eigentracking: Robust matching and tracking of articulated objects using a view-based representation, in Proceedings of European Conference on Computer Vision, 1996.
[21] A. D. Jepson, D. J. Fleet, and T. F. El-Maraghi, Robust online appearance models for visual tracking, IEEE Transactions on Pattern Analysis and Machine Intelligence 25(10), 2003.
[22] S. Avidan, Support vector tracking, IEEE Transactions on Pattern Analysis and Machine Intelligence 26(8), 2004.
[23] S. Avidan, Ensemble tracking, IEEE Transactions on Pattern Analysis and Machine Intelligence 29(2), 2007.
[24] D. Lowe, Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision 60(2), 2004.
[25] N. Dalal and B. Triggs, Histograms of oriented gradients for human detection, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2005.
[26] T. Ojala, M. Pietikainen, and T. Maenpaa, Multiresolution gray-scale and rotation invariant texture classification with local binary patterns, IEEE Transactions on Pattern Analysis and Machine Intelligence 24(7), 2002.
[27] P. Pérez, C. Hue, J. Vermaak, and M. Gangnet, Color-based probabilistic tracking, in Proceedings of European Conference on Computer Vision, 2002.
[28] R. T. Collins, Mean-shift blob tracking through scale space, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2003.
[29] S. T. Birchfield and S. Rangarajan, Spatiograms versus histograms for region-based tracking, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2005.
[30] S. Baker and I. Matthews, Lucas-Kanade 20 years on: A unifying framework, International Journal of Computer Vision 56(3), 2004.
[31] X. Li, W. Hu, Z. Zhang, X. Zhang, and G. Luo, Robust visual tracking based on incremental tensor subspace learning, in Proceedings of the IEEE International Conference on Computer Vision, 2007.
[32] F. Porikli, O. Tuzel, and P. Meer, Covariance tracking using model update based on Lie algebra, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2006.
[33] L. Matthews, T. Ishikawa, and S. Baker, The template update problem, IEEE Transactions on Pattern Analysis and Machine Intelligence 26(6), 2004.
[34] M. Everingham, L. Van Gool, C. Williams, J. Winn, and A. Zisserman, The PASCAL visual object classes (VOC) challenge, International Journal of Computer Vision 88(2), 2010.