An Active Head Tracking System for Distance Education and Videoconferencing Applications
Sami Huttunen and Janne Heikkilä
Machine Vision Group, Infotech Oulu and Department of Electrical and Information Engineering, P.O. Box 4500, FIN University of Oulu, Finland
{samhut,

Abstract

We present a system for automatic head tracking with a single pan-tilt-zoom (PTZ) camera. In distance education, the PTZ tracking system can be used to actively follow a teacher as s/he moves in the classroom. In other videoconferencing applications, the system can be used to provide a continuous close-up view of the person. Since the color features used in tracking are selected and updated online, the system can adapt rapidly to changes. The information received from the tracking module is used to actively control the PTZ camera so that the person stays in the camera view. In addition, the system is able to recover from erroneous situations. Preliminary experiments indicate that the PTZ system performs well under different lighting conditions and large scale changes.

1 Introduction

Distance education and videoconferencing are nowadays widely used in universities and other schools, as well as in enterprises. Using wide-angle cameras, it is easy to keep the speaker in the field of view (FOV) all the time. Unfortunately, this approach results in a low-resolution image in which details can be blurry. To cope with this problem, the camera has to be steered automatically. In a lecture room, the lighting conditions can change rapidly due to, for example, slide changes. This is the main reason that only a few algorithms for active tracking are used in practice. In our system, a pan-tilt-zoom (PTZ) camera is designed to track the teacher when s/he walks at the front of the classroom. Tracking is carried out based on the information provided by the pan-tilt-zoom camera itself. In general, automatic PTZ tracking can be divided into two categories.
The first option, also applied in this work, is to use mechanical PTZ cameras that are controlled through a command interface. The other way is to use virtual camera control based on panoramic video capturing. However, systems based on virtual camera control require that the camera's video resolution be higher than the required output resolution. A recent approach to tracking the teacher was introduced by Microsoft Research [12]. They present a hybrid speaker tracking scheme based on a single PTZ camera in an automated lecture capturing system. Since the camera's video resolution is usually higher than the required output resolution, they frame the output video as a sub-region of the camera's input video. This allows tracking the speaker both digitally and mechanically. According to their paper, digital tracking has the advantage of being smooth, while mechanical tracking can cover a wide area. The tracking is based on motion information obtained from the camera image. The system introduced in [9] targets applications such as classroom lectures and videoconferencing. First, wide-angle video is captured by stitching video images from multiple stationary cameras. After that, a region of interest (ROI) can be digitally cropped from the panoramic video. The ROI is located using motion and color information in the uncompressed domain and macroblock information in the compressed domain. As a result, the system simulates human-controlled video recording. One example of active head tracking is [6], which utilizes both color and shape information. The shape of a head is assumed to be an ellipse, and a model color histogram is acquired in advance. Then, in the first frame, the appropriate position and scale of the head are determined based on user input. In the following frames, the initial position is set to the position of the ellipse in the previous frame.
The Mean Shift procedure is applied to make the ellipse position converge to the target center, where the color histogram similarity to the model and to the previous histogram is maximized. The previous histogram in this case is a color histogram adaptively extracted from the result of the previous frame.

Figure 1. Overview of the PTZ tracking system (Initialization, Feature Selection, Feature Update, Head Tracking, PTZ Control).

A classroom or a meeting room environment is not the only place to apply active PTZ tracking. In particular, visual surveillance solutions include active tracking components. For example, the work done in [8] addresses the problem of continuous tracking of moving objects with a PTZ camera. First, a set of good trackable features belonging to the selected target is extracted. Second, their system adopts a feature clustering method that is able to discriminate between features associated with the background and features associated with different moving objects. In another surveillance-related work [4], the task of a two-camera system is to continuously provide zoomed-in high-resolution images of the face of a person present in the room. Usually, the active tracking methods introduced are based on skin color. The work done in [3] employs the Bhattacharyya coefficient as a similarity measure between the color distribution of the face model and face candidates. A dual-mode implementation of the camera controller determines the pan, tilt, and zoom of the camera to switch between smooth pursuit and saccadic movements, as a function of the target presence in the fovea region. Another system [11] combines color-based face tracking and face detection to be able to track faces under varying pose. The tracking system implemented and described in this paper is mainly built on existing methods and algorithms, which are brought together in a novel way to form a complete PTZ tracking solution. As a result of our work, we have a PTZ tracking system that is able to operate robustly under large scale changes and different lighting conditions. The active tracking methods mentioned above lack the ability to cope with scale changes and to update color features in parallel with color-based tracking.
Generally speaking, the most important property is that the system does not require any manual control and is able to recover from erroneous situations. In our work, the initialization phase of tracking utilizes the human head shape and motion information. After the initialization step, the color features used in tracking are selected and updated online. Finally, the location of the person's head is found and tracked by applying a modified version of the Continuously Adaptive Mean Shift (CAMSHIFT) [1] algorithm. On the basis of the tracking result, a PTZ camera is steered to keep the target in the camera view. If a tracking failure occurs, the system returns to the initialization phase. Fig. 1 gives an overview of the system.

This paper is organized as follows. Section 2 contains a review of the initialization and feature selection methods as well as a description of the feature update and tracking algorithms implemented. In Section 3, we describe our PTZ control hardware and methodology. Preliminary experiments are reported in Section 4. Finally, conclusions are given in Section 5.

2 Feature Selection and Tracking

2.1 Initialization

In order to start tracking, the initial position of the target has to be found. The main problem when using color information is how to obtain the object color model without a priori knowledge. Especially in environments with changing lighting conditions, it is nearly impossible to use any fixed color model acquired beforehand. To cope with this problem, some other, non-color-based method is needed for finding the original position. In this work the target is the speaker's head, which can be localized by utilizing shape and motion information. After the initial position of the head has been acquired, the color-based tracking method can select the features used. The object detector used in this work was initially proposed by Viola [10] and improved by Lienhart [7].
The actual implementation of the detector is based on the Intel Open Computer Vision Library (OpenCV) [5]. In this case, the size of the face samples the classifier has been trained on is 20x20 pixels. The implementation finds rectangular regions in the given image that are likely to contain faces and returns those regions as a sequence of rectangles. In order to find the objects of interest at different sizes, the function scans the image several times at different scales, which is more efficient than resizing the image itself. Each time, it considers overlapping regions in the image and applies the classifiers to them. After it has collected the candidate rectangles (regions that passed the classifier cascade), it groups them and returns a sequence of average rectangles for each sufficiently large group. Unfortunately, the aforesaid face detector can occasionally give wrong results, which makes initialization more difficult. Since the speaker is usually moving when giving a presentation, it is reasonable to run the head detector only inside areas where motion has been detected. In addition, the head has to be detected in several successive frames before the tracking phase can take place. This kind of approach decreases the possibility of a false alarm substantially.
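The motion-gated confirmation logic described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the difference threshold and the number of confirming frames are assumptions (the paper only says "several successive frames"), and `DetectionGate` is a hypothetical name.

```python
import numpy as np

def motion_mask(prev, cur, thresh=25):
    """Simple frame differencing: mark pixels whose gray-level change
    between consecutive frames exceeds a threshold (threshold assumed)."""
    return np.abs(cur.astype(np.int16) - prev.astype(np.int16)) > thresh

class DetectionGate:
    """Accept a head detection only after it has been confirmed in
    several successive frames inside a region containing motion."""
    def __init__(self, n_confirm=3):
        self.n_confirm = n_confirm
        self.streak = 0

    def update(self, rect, mask):
        # rect = (x, y, w, h) from the detector, or None if nothing found
        if rect is not None:
            x, y, w, h = rect
            if mask[y:y + h, x:x + w].any():   # motion inside the detection
                self.streak += 1
            else:
                self.streak = 0
        else:
            self.streak = 0
        return self.streak >= self.n_confirm
```

Gating the detector on the motion mask both suppresses static false positives (e.g. face-like patterns on slides) and shrinks the region the cascade has to scan.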
Figure 2. Head detection example.

Figure 3. Overview of feature selection and update (get the next image and check the feature update status; if an update is ready, adopt the new features, otherwise use the previous ones; track the object and start a new feature update process).

In the current system, a simple video frame differencing method is utilized for detecting motion. Frame differencing is a computationally inexpensive way to reduce the size of the scanned region, thus ensuring real-time operation of the whole method. Fig. 2 shows an example head detection result from the method described above.

2.2 Feature Selection and Update

Once the location of the head has been found in the previous phase, color-based tracking can take place. To guarantee robustness, the features used in tracking have to be updated online. Therefore, the solution used in this pan-tilt-zoom tracking system relies on a modified version of the method originally introduced in [2]. The basic hypothesis is that the features that best discriminate between object and background are also best for tracking the object. Given a set of seed features, log-likelihood ratios of class-conditional sample densities from the object and the background are computed. These densities are used to form a new set of candidate features tailored to the local object/background discrimination task. Finally, a two-class variance ratio is used to rank these new features according to how well they separate the sample distributions of object and background pixels, and only the top three are chosen. The set of seed candidate features is composed of linear combinations of camera R, G, B pixel values. As in the original method, the integer weights for the different color channels are between -2 and 2. By leaving out the coefficient groups that are linear combinations of each other, a pool of 49 features remains. In the original method described in [2], the feature update process is invoked only periodically.
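The two ingredients of this selection step can be sketched directly from the description above: enumerating the 49-feature pool (all integer weight triples in [-2, 2] with collinear combinations removed) and ranking a feature by the two-class variance ratio of its log-likelihood image, as in Collins et al. [2]. The histogram binning and the smoothing constant `eps` are assumptions, not taken from the paper.

```python
import numpy as np
from math import gcd

def feature_pool():
    """All w·(R,G,B) with integer weights in [-2,2], keeping one
    representative per set of collinear weight vectors."""
    seen = set()
    for wr in range(-2, 3):
        for wg in range(-2, 3):
            for wb in range(-2, 3):
                if wr == wg == wb == 0:
                    continue
                g = gcd(gcd(abs(wr), abs(wg)), abs(wb))
                v = (wr // g, wg // g, wb // g)
                # normalize sign so v and -v count as the same feature
                if v[0] < 0 or (v[0] == 0 and (v[1] < 0 or (v[1] == 0 and v[2] < 0))):
                    v = (-v[0], -v[1], -v[2])
                seen.add(v)
    return sorted(seen)

def variance_ratio(p_obj, p_bg, eps=1e-3):
    """Two-class variance ratio of the log-likelihood feature
    L = log(p_obj / p_bg): high when L spreads the classes apart
    relative to the within-class spread."""
    L = np.log((p_obj + eps) / (p_bg + eps))
    def var(a):                       # variance of L under distribution a
        return np.sum(a * L ** 2) - np.sum(a * L) ** 2
    return var(0.5 * (p_obj + p_bg)) / (var(p_obj) + var(p_bg) + eps)
```

In use, each of the 49 features would be histogrammed over object and background pixels, scored with `variance_ratio`, and the top three kept for tracking.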
Unfortunately, the update process is computationally heavy and therefore time consuming, which means that the tracked object can be lost during the update. Since the feature update process can take a long time, a new approach to the problem is proposed. In our system, the features used are updated continuously, as fast as possible, without causing unwanted delays to tracking. The feature update process is run in parallel with the object tracking, as Fig. 3 illustrates. When new features become available, they are adopted immediately. Such an approach guarantees that the tracking method can adapt to sudden changes in the environment. To make tracking more robust, the original samples from the first frame are combined with the pixel samples from the current frame.

2.3 Head Tracking

In our system, the size and location of the object change during tracking due to both object movement and camera zooming. Therefore, the traditional Mean Shift algorithm cannot be used. Instead, the Continuously Adaptive Mean Shift (CAMSHIFT) algorithm has been proposed for tracking the head and face in a perceptual user interface [1]. In order to use the CAMSHIFT algorithm to track colored objects in a video scene, a probability distribution image of the desired color must be created for each video frame. Each pixel in the probability image represents the probability that the color of the corresponding pixel in the input frame belongs to the object. In the original CAMSHIFT algorithm, the probability image is calculated using only a 1-D histogram of the hue component. Therefore, the algorithm may fail in cases where hue alone cannot distinguish the target from the background. In the system implemented, the probability image is obtained from scaled weight images which are generated for every incoming frame using the selected features. In a weight image, object pixels contain positive values whereas background pixels contain negative values.
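The core of a CAMSHIFT-style search on such a weight image can be sketched as below. The paper's modified CAMSHIFT is not fully specified, so this is the standard moment-based update on a single weight image; clipping negative (background) weights to zero and the exact window-resize rule are assumptions made for the sketch.

```python
import numpy as np

def camshift_step(weight, win):
    """One mean-shift iteration: move the search window to the center
    of mass of the positive weights under it, and rescale the window
    from the zeroth moment (CAMSHIFT's adaptive window size)."""
    x, y, w, h = win
    roi = np.clip(weight[y:y + h, x:x + w], 0, None)  # ignore background pixels
    m00 = roi.sum()
    if m00 <= 0:
        return win, False                              # no object pixels: stop
    ys, xs = np.mgrid[0:roi.shape[0], 0:roi.shape[1]]
    cx = x + (roi * xs).sum() / m00                    # center of mass
    cy = y + (roi * ys).sum() / m00
    s = max(4, int(2 * np.sqrt(m00)))                  # new window size
    nx = max(0, min(int(round(cx - s / 2)), weight.shape[1] - s))
    ny = max(0, min(int(round(cy - s / 2)), weight.shape[0] - s))
    return (nx, ny, s, s), True

def track(weight, win, iters=10):
    """Iterate until the window settles on the mode of the weight image."""
    for _ in range(iters):
        win, ok = camshift_step(weight, win)
        if not ok:
            break
    return win
```

In the full system this loop would run independently on each of the three weight images, and the three results would be averaged, as described below.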
Only the top three features selected are used to compute the weight images for the current frame.

Figure 4. Tracked object and one of the corresponding weight images.

Fig. 4 illustrates how the weight image on the right-hand side gives a good basis for tracking. The center and size of the head are found via the modified CAMSHIFT algorithm operating on every weight image independently, as shown in Fig. 5. The initial search window size and location for tracking are set using the results reported by the head detector. Since the number of features used is three, we get three weight images for one incoming frame. The results of the tracking algorithm for the different weight images are combined by calculating the average. The current size and location of the tracked object are used to set the size and location of the search window in the next video frame. The process is then repeated for continuous tracking. To be able to recover from erroneous situations, the system has to detect tracking failures. In the current approach, the target is reported lost when the size of the search window is smaller than a threshold value given in advance. Also, the number of object pixels in the weight images has to be clearly above a predetermined threshold in order to continue tracking.

3 Pan-Tilt-Zoom Control

3.1 Hardware

The basis of the system is a Sony EVI-D31 PTZ camera, which has been widely used especially in videoconferencing solutions. It offers a wide range of tunable parameters for PTZ control:

- Pan range: 200 deg in 1760 increments
- Tilt range: 50 deg in 600 increments
- Pan speed range: from 0 to 65 deg/sec in 24 discrete steps
- Tilt speed range: from 0 to 30 deg/sec in 20 discrete steps
- Zoom range: from f5.4 to f64.8 mm in 12 discrete steps

Since there are many variables that can be adjusted, the camera can be steered smoothly.
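Given those speed ranges, a controller might map the head's offset from the image center to one of the camera's discrete speed steps. This is only a sketch: the proportional mapping, the dead zone, and the function names are assumptions; the paper states only that pan/tilt speeds are increased when the target drifts toward the FOV boundary.

```python
def speed_steps(err_px, half_width, max_steps, dead_zone=0.15):
    """Map a pixel offset from the image center to a signed discrete
    speed step; small offsets inside the dead zone cause no motion."""
    frac = err_px / float(half_width)                  # -1 .. 1
    if abs(frac) < dead_zone:
        return 0
    sign = 1 if frac > 0 else -1
    mag = (abs(frac) - dead_zone) / (1.0 - dead_zone)  # rescale past the dead zone
    return sign * max(1, min(max_steps, round(mag * max_steps)))

def ptz_command(head_x, head_y, width=352, height=288,
                pan_steps=24, tilt_steps=20):
    """Pan/tilt speed steps for a CIF image and the EVI-D31's
    24 pan / 20 tilt speed steps."""
    pan = speed_steps(head_x - width // 2, width // 2, pan_steps)
    tilt = speed_steps(head_y - height // 2, height // 2, tilt_steps)
    return pan, tilt
```

A head near the image center yields zero speed, while a head at the FOV edge requests the maximum step, which matches the smooth-versus-fast behavior described in Section 3.2.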
This is one of the main reasons for selecting this particular camera model.

Figure 5. Block diagram of the tracking algorithm (set the calculation region at the search window center but larger in size than the search window; calculate the weight image; find the center of mass within the search window; center the search window at the center of mass and find the area under it; if not converged, repeat; otherwise report X, Y and the size, and use (X, Y) to set the next search window center).

3.2 Control Strategy and Software

Adequate control of the PTZ camera is an essential part of the system implemented. The camera should react to the movements of the target depending on how large those movements are. When the person is moving slowly, the camera should be able to follow smoothly. In the opposite situation, the camera control has to be fast enough to keep the speaker's head in the field of view (FOV). The software components of the PTZ control module are presented in Fig. 6. The user interface of the control system is developed in C++ and is only briefly described here. In practice, it includes buttons for starting and stopping the system, and it offers a way to configure the parameters of the system. To provide a communication channel between the hardware components of the system, there is a standard RS-232C serial port connection between a regular Pentium IV PC and the Sony EVI-D31 camera. The control commands from the PC to the camera are sent using Sony's VISCA protocol. It is worth noting that tracking is not interrupted by command execution. When the tracked target, the person's head in this case, moves to the boundary of the FOV, the PTZ camera is panned/tilted to keep the speaker inside the FOV. The current position and size of the head in the image are received from the tracking method described in the previous section. To ensure smooth operation, the pan and tilt speeds of the camera are adjusted actively while the object is moving. In practice this means that the pan and tilt speeds are increased if the target is getting out of the FOV despite the current camera movement. When the teacher stops moving,
the zoom level of the camera is set to keep the size of the head in the image constant. The desired size of the object can be determined beforehand.

Figure 6. PTZ control software (user interface, tracking algorithm and camera control library on the PC; RS-232 (VISCA) hardware communication to the PTZ camera's command interface).

Figure 8. Tracking almost fails when the person stands up fast.

Since the camera pan and tilt speeds are limited to a particular range and the tracking method can fail, there may be situations in which the camera loses the person from the FOV. To solve this problem, a simple lost-and-found strategy is utilized. When the target is lost, the camera is stopped for five seconds. If the teacher does not appear in the view during this time, the camera zooms out and returns to the initial position, waiting until the initialization method has detected the object again.

4 Experimental Results

Comparing the performance of our system to other similar systems is difficult: every system has its own requirements for the hardware and the physical environment, and it is also impossible to run all the systems at the same time to compare their real-time performance. However, to assess the performance of our system, some preliminary real-time tests have been conducted in a room where the lighting conditions are not uniform. The tests were run on a standard PC (Pentium IV at 3.0 GHz, 1 GB RAM). The video image size is CIF (352x288), captured by the aforementioned Sony EVI-D31 camera at 25 fps. In the test sequence presented in this paper, a person is sitting in front of a computer screen at the beginning. First, the person's head is detected while he is looking at the computer screen. After a while, the person stands up and starts walking around the room. The camera is panned and tilted to keep him in the FOV. To visualize the operation of the system during the test, we selected some clips from the output video (Fig. 7(a)).
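The lost-and-found strategy described above amounts to a small state machine, which can be sketched as follows. The state names and the `update(visible, now)` interface are illustrative, not the paper's code; only the five-second hold and the zoom-out-and-return behavior come from the text.

```python
TRACKING, WAITING, SEARCHING = "tracking", "waiting", "searching"

class LostAndFound:
    """On target loss, hold the camera still for five seconds; if the
    target does not reappear, zoom out and return to the home position
    to wait for re-initialization."""
    def __init__(self, wait_s=5.0):
        self.wait_s = wait_s
        self.state = TRACKING
        self.lost_at = None

    def update(self, target_visible, now):
        if target_visible:
            self.state, self.lost_at = TRACKING, None
        elif self.state == TRACKING:
            self.state, self.lost_at = WAITING, now    # stop the camera
        elif self.state == WAITING and now - self.lost_at >= self.wait_s:
            self.state = SEARCHING                     # zoom out, go home
        return self.state
```

The `SEARCHING` state hands control back to the initialization phase of Section 2.1, closing the recovery loop of Fig. 1.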
When the person walks away from the camera, the camera zoom is adjusted to keep the size of the head constant. In the last situation, the person has returned to the starting point, as shown in Fig. 7(b). As mentioned earlier, the lighting conditions in the test room are not stable. In addition, the PTZ camera used includes automatic gain and white balance control, which means that the color features used for tracking have to be updated constantly. During the test, approximately 10 frames pass on average before a feature update is ready. Since the execution time of the feature update process depends on the current size of the tracked object, the exact time between updates can vary. As we can see from the test sequence, the system can adapt to changes in lighting conditions. The PTZ control is also able to follow the person without difficulty. However, the serial port connection between the PC and the PTZ camera may cause unwanted delays. This problem appears clearly in the sample shown in Fig. 8, where the person stands up rapidly. In this situation, the tilt speed is not adjusted quickly enough due to buffering in command sending and receiving.

5 Discussion and Conclusions

In this paper, we have presented a PTZ tracking system which can be used to actively follow the speaker in different videoconferencing applications. The system detects the lecturer using head shape and motion information. The features applied to tracking are selected and updated online, which makes the method tolerant to lighting condition changes. The PTZ module keeps the person's head in the field of view by controlling the pan and tilt speeds in response to the object's movements. In the future, the goal is to integrate the work presented in this paper into an automated distance education system which currently utilizes only fixed cameras.

References

[1] G. R. Bradski. Real time face and object tracking as a component of a perceptual user interface.
In Fourth IEEE Workshop on Applications of Computer Vision (WACV 98), pages , October.

[2] R. T. Collins, Y. Liu, and M. Leordeanu. Online selection of discriminative tracking features. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(10): , October 2005.
Figure 7. Tracking results. (a) The person leaves the table and starts moving around the room; (b) the person walks away from the camera and returns to the starting point.

[3] D. Comaniciu and V. Ramesh. Robust detection and tracking of human faces with an active camera. In Third IEEE International Workshop on Visual Surveillance, pages 11–18, July.

[4] M. Greiffenhagen, D. Comaniciu, H. Niemann, and V. Ramesh. Design, analysis, and engineering of video monitoring systems: an approach and a case study. Proceedings of the IEEE, 89(10): , October.

[5] Intel Corporation. Open computer vision library homepage. Online, opencv/index.htm.

[6] D. Jeong, Y. K. Yang, D.-G. Kang, and J. B. Ra. Real-time head tracking based on color and shape information. In A. Said and J. G. Apostolopoulos, editors, Proceedings of SPIE: Image and Video Communications and Processing, volume 5685, pages , March.

[7] R. Lienhart and J. Maydt. An extended set of Haar-like features for rapid object detection. In International Conference on Image Processing, volume 1, pages , September.

[8] C. Micheloni and G. L. Foresti. Zoom on target while tracking. In IEEE International Conference on Image Processing, volume 3, pages , September.

[9] X. Sun, J. Foote, D. Kimber, and B. S. Manjunath. Region of interest extraction and virtual camera control based on panoramic video capturing. IEEE Transactions on Multimedia, 7(5): , October.

[10] P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, pages , .

[11] T. Yang, S. Z. Li, Q. Pan, J. Li, and C. Zhao. Reliable and fast tracking of faces under varying pose. In Seventh International Conference on Automatic Face and Gesture Recognition, pages , April.

[12] C. Zhang, Y. Rui, L. He, and M. Wallick. Hybrid speaker tracking in an automated lecture room. In International Conference on Multimedia and Expo (ICME), pages 81–84, July 2005.
OpenEXR Image Viewing Software
OpenEXR Image Viewing Software Florian Kainz, Industrial Light & Magic updated 07/26/2007 This document describes two OpenEXR image viewing software programs, exrdisplay and playexr. It briefly explains
Distributed Vision Processing in Smart Camera Networks
Distributed Vision Processing in Smart Camera Networks CVPR-07 Hamid Aghajan, Stanford University, USA François Berry, Univ. Blaise Pascal, France Horst Bischof, TU Graz, Austria Richard Kleihorst, NXP
Product Characteristics Page 2. Management & Administration Page 2. Real-Time Detections & Alerts Page 4. Video Search Page 6
Data Sheet savvi Version 5.3 savvi TM is a unified video analytics software solution that offers a wide variety of analytics functionalities through a single, easy to use platform that integrates with
A Robust Multiple Object Tracking for Sport Applications 1) Thomas Mauthner, Horst Bischof
A Robust Multiple Object Tracking for Sport Applications 1) Thomas Mauthner, Horst Bischof Institute for Computer Graphics and Vision Graz University of Technology, Austria {mauthner,bischof}@icg.tu-graz.ac.at
Tracking and Recognition in Sports Videos
Tracking and Recognition in Sports Videos Mustafa Teke a, Masoud Sattari b a Graduate School of Informatics, Middle East Technical University, Ankara, Turkey [email protected] b Department of Computer
Tracking of Small Unmanned Aerial Vehicles
Tracking of Small Unmanned Aerial Vehicles Steven Krukowski Adrien Perkins Aeronautics and Astronautics Stanford University Stanford, CA 94305 Email: [email protected] Aeronautics and Astronautics Stanford
Using Real Time Computer Vision Algorithms in Automatic Attendance Management Systems
Using Real Time Computer Vision Algorithms in Automatic Attendance Management Systems Visar Shehu 1, Agni Dika 2 Contemporary Sciences and Technologies - South East European University, Macedonia 1 Contemporary
SCode CMS (Central Monitoring System) V3.5.1 (SP-501 and MP-9200)
CMS (Central Monitoring System) V3.5.1 (SP-501 and MP-9200) Technical Characteristic Multi-server and Central-server Structure The multi-server structure is an innovated design. Each Server handles a regional
AUTOMATED ATTENDANCE CAPTURE AND TRACKING SYSTEM
Journal of Engineering Science and Technology EURECA 2014 Special Issue January (2015) 45-59 School of Engineering, Taylor s University AUTOMATED ATTENDANCE CAPTURE AND TRACKING SYSTEM EU TSUN CHIN*, WEI
Environmental Remote Sensing GEOG 2021
Environmental Remote Sensing GEOG 2021 Lecture 4 Image classification 2 Purpose categorising data data abstraction / simplification data interpretation mapping for land cover mapping use land cover class
IP Matrix MVC-FIPM. Installation and Operating Manual
IP Matrix MVC-FIPM en Installation and Operating Manual IP Matrix IP Matrix Table of Contents en 3 Table of Contents 1 Preface 5 1.1 About this Manual 5 1.2 Conventions in this Manual 5 1.3 Intended Use
Multi-view Intelligent Vehicle Surveillance System
Multi-view Intelligent Vehicle Surveillance System S. Denman, C. Fookes, J. Cook, C. Davoren, A. Mamic, G. Farquharson, D. Chen, B. Chen and S. Sridharan Image and Video Research Laboratory Queensland
Firmware Release Letter BVIP Firmware 5.52. Table of Contents. 1. Introduction 2 1.1 Disclaimer of Warranty 2 1.2 Purpose 2 1.
Firmware Release Letter BVIP Firmware 5.52 Table of Contents 1. Introduction 2 1.1 Disclaimer of Warranty 2 1.2 Purpose 2 1.3 Scope 2 2. About This Service Pack Release 2 3. Hardware s 2 4. Software s
A Method for Controlling Mouse Movement using a Real- Time Camera
A Method for Controlling Mouse Movement using a Real- Time Camera Hojoon Park Department of Computer Science Brown University, Providence, RI, USA [email protected] Abstract This paper presents a new
False alarm in outdoor environments
Accepted 1.0 Savantic letter 1(6) False alarm in outdoor environments Accepted 1.0 Savantic letter 2(6) Table of contents Revision history 3 References 3 1 Introduction 4 2 Pre-processing 4 3 Detection,
Extracting a Good Quality Frontal Face Images from Low Resolution Video Sequences
Extracting a Good Quality Frontal Face Images from Low Resolution Video Sequences Pritam P. Patil 1, Prof. M.V. Phatak 2 1 ME.Comp, 2 Asst.Professor, MIT, Pune Abstract The face is one of the important
Learning Detectors from Large Datasets for Object Retrieval in Video Surveillance
2012 IEEE International Conference on Multimedia and Expo Learning Detectors from Large Datasets for Object Retrieval in Video Surveillance Rogerio Feris, Sharath Pankanti IBM T. J. Watson Research Center
PHOTOGRAMMETRIC TECHNIQUES FOR MEASUREMENTS IN WOODWORKING INDUSTRY
PHOTOGRAMMETRIC TECHNIQUES FOR MEASUREMENTS IN WOODWORKING INDUSTRY V. Knyaz a, *, Yu. Visilter, S. Zheltov a State Research Institute for Aviation System (GosNIIAS), 7, Victorenko str., Moscow, Russia
A Real Time Driver s Eye Tracking Design Proposal for Detection of Fatigue Drowsiness
A Real Time Driver s Eye Tracking Design Proposal for Detection of Fatigue Drowsiness Nitin Jagtap 1, Ashlesha kolap 1, Mohit Adgokar 1, Dr. R.N Awale 2 PG Scholar, Dept. of Electrical Engg., VJTI, Mumbai
The Visual Internet of Things System Based on Depth Camera
The Visual Internet of Things System Based on Depth Camera Xucong Zhang 1, Xiaoyun Wang and Yingmin Jia Abstract The Visual Internet of Things is an important part of information technology. It is proposed
USING COMPUTER VISION IN SECURITY APPLICATIONS
USING COMPUTER VISION IN SECURITY APPLICATIONS Peter Peer, Borut Batagelj, Franc Solina University of Ljubljana, Faculty of Computer and Information Science Computer Vision Laboratory Tržaška 25, 1001
INTELLECT TM Software Package
AxxonSoft INTELLECT TM Software Package Quick Start Guide Version 1.0.0 Moscow 2010 1 Contents CONTENTS... 2 1 INTRODUCTION... 3 1.1 Document purpose... 3 1.2 Purpose of the Intellect software package...
Real time vehicle detection and tracking on multiple lanes
Real time vehicle detection and tracking on multiple lanes Kristian Kovačić Edouard Ivanjko Hrvoje Gold Department of Intelligent Transportation Systems Faculty of Transport and Traffic Sciences University
Face Model Fitting on Low Resolution Images
Face Model Fitting on Low Resolution Images Xiaoming Liu Peter H. Tu Frederick W. Wheeler Visualization and Computer Vision Lab General Electric Global Research Center Niskayuna, NY, 1239, USA {liux,tu,wheeler}@research.ge.com
Designing and Embodiment of Software that Creates Middle Ware for Resource Management in Embedded System
, pp.97-108 http://dx.doi.org/10.14257/ijseia.2014.8.6.08 Designing and Embodiment of Software that Creates Middle Ware for Resource Management in Embedded System Suk Hwan Moon and Cheol sick Lee Department
A Study of Automatic License Plate Recognition Algorithms and Techniques
A Study of Automatic License Plate Recognition Algorithms and Techniques Nima Asadi Intelligent Embedded Systems Mälardalen University Västerås, Sweden [email protected] ABSTRACT One of the most
Boundless Security Systems, Inc.
Boundless Security Systems, Inc. sharper images with better access and easier installation Product Overview Product Summary Data Sheet Control Panel client live and recorded viewing, and search software
Interactive person re-identification in TV series
Interactive person re-identification in TV series Mika Fischer Hazım Kemal Ekenel Rainer Stiefelhagen CV:HCI lab, Karlsruhe Institute of Technology Adenauerring 2, 76131 Karlsruhe, Germany E-mail: {mika.fischer,ekenel,rainer.stiefelhagen}@kit.edu
Automated Monitoring System for Fall Detection in the Elderly
Automated Monitoring System for Fall Detection in the Elderly Shadi Khawandi University of Angers Angers, 49000, France [email protected] Bassam Daya Lebanese University Saida, 813, Lebanon Pierre
Development of an automated Red Light Violation Detection System (RLVDS) for Indian vehicles
CS11 59 Development of an automated Red Light Violation Detection System (RLVDS) for Indian vehicles Satadal Saha 1, Subhadip Basu 2 *, Mita Nasipuri 2, Dipak Kumar Basu # 2 # AICTE Emeritus Fellow 1 CSE
ARTICLE. 10 reasons to switch to IP-based video
ARTICLE 10 reasons to switch to IP-based video Table of contents 1. High resolution 3 2. Easy to install 4 3. Truly digital 5 4. Camera intelligence 5 5. Fully integrated 7 6. Built-in security 7 7. Crystal-clear
Professional Surveillance System User s Manual
Professional Surveillance System User s Manual Version 4.06 Table of Contents 1 OVERVIEW AND ENVIRONMENT... 1 1.1 Overview...1 1.2 Environment...1 2 INSTALLATION AND UPGRADE... 2 2.1 Installation...2 2.2
Determining optimal window size for texture feature extraction methods
IX Spanish Symposium on Pattern Recognition and Image Analysis, Castellon, Spain, May 2001, vol.2, 237-242, ISBN: 84-8021-351-5. Determining optimal window size for texture feature extraction methods Domènec
Video Analytics A New Standard
Benefits The system offers the following overall benefits: Tracker High quality tracking engine UDP s embedded intelligent Video Analytics software is fast becoming the standard for all surveillance and
Tracking performance evaluation on PETS 2015 Challenge datasets
Tracking performance evaluation on PETS 2015 Challenge datasets Tahir Nawaz, Jonathan Boyle, Longzhen Li and James Ferryman Computational Vision Group, School of Systems Engineering University of Reading,
ivms-4200 Client Software Quick Start Guide V1.02
ivms-4200 Client Software Quick Start Guide V1.02 Contents 1 Description... 2 1.1 Running Environment... 2 1.2 Surveillance System Architecture with an Performance of ivms-4200... 3 2 Starting ivms-4200...
Building an Advanced Invariant Real-Time Human Tracking System
UDC 004.41 Building an Advanced Invariant Real-Time Human Tracking System Fayez Idris 1, Mazen Abu_Zaher 2, Rashad J. Rasras 3, and Ibrahiem M. M. El Emary 4 1 School of Informatics and Computing, German-Jordanian
Service Pack Release Letter MIC Series 550 System Controller 2.12.00.07. Table of Contents
Service Pack Release Letter Table of Contents 1. Introduction 2 1.1 Disclaimer of Warranty 2 1.2 Purpose 2 1.3 Scope 2 1.4 Installation Requirements 2 2. About This Service Pack Release 2 3. Hardware Changes
OPERATION MANUAL. MV-410RGB Layout Editor. Version 2.1- higher
OPERATION MANUAL MV-410RGB Layout Editor Version 2.1- higher Table of Contents 1. Setup... 1 1-1. Overview... 1 1-2. System Requirements... 1 1-3. Operation Flow... 1 1-4. Installing MV-410RGB Layout
How does the Kinect work? John MacCormick
How does the Kinect work? John MacCormick Xbox demo Laptop demo The Kinect uses structured light and machine learning Inferring body position is a two-stage process: first compute a depth map (using structured
Journal of Chemical and Pharmaceutical Research, 2014, 6(5): 647-651. Research Article
Available online www.jocpr.com Journal of Chemical and Pharmaceutical Research, 2014, 6(5): 647-651 Research Article ISSN : 0975-7384 CODEN(USA) : JCPRC5 Comprehensive colliery safety monitoring system
UNIVERSITY OF CENTRAL FLORIDA AT TRECVID 2003. Yun Zhai, Zeeshan Rasheed, Mubarak Shah
UNIVERSITY OF CENTRAL FLORIDA AT TRECVID 2003 Yun Zhai, Zeeshan Rasheed, Mubarak Shah Computer Vision Laboratory School of Computer Science University of Central Florida, Orlando, Florida ABSTRACT In this
Video Tracking Software User s Manual. Version 1.0
Video Tracking Software User s Manual Version 1.0 Triangle BioSystems International 2224 Page Rd. Suite 108 Durham, NC 27703 Phone: (919) 361-2663 Fax: (919) 544-3061 www.trianglebiosystems.com Table of
ImagineWorldClient Client Management Software. User s Manual. (Revision-2)
ImagineWorldClient Client Management Software User s Manual (Revision-2) (888) 379-2666 US Toll Free (905) 336-9665 Phone (905) 336-9662 Fax www.videotransmitters.com 1 Contents 1. CMS SOFTWARE FEATURES...4
FB-500A User s Manual
Megapixel Day & Night Fixed Box Network Camera FB-500A User s Manual Quality Service Group Product name: Network Camera (FB-500A Series) Release Date: 2011/7 Manual Revision: V1.0 Web site: Email: www.brickcom.com
FPGA Implementation of Human Behavior Analysis Using Facial Image
RESEARCH ARTICLE OPEN ACCESS FPGA Implementation of Human Behavior Analysis Using Facial Image A.J Ezhil, K. Adalarasu Department of Electronics & Communication Engineering PSNA College of Engineering
Index Terms: Face Recognition, Face Detection, Monitoring, Attendance System, and System Access Control.
Modern Technique Of Lecture Attendance Using Face Recognition. Shreya Nallawar, Neha Giri, Neeraj Deshbhratar, Shamal Sane, Trupti Gautre, Avinash Bansod Bapurao Deshmukh College Of Engineering, Sewagram,
How To Create A Security System Based On Suspicious Behavior Detection
Security System Based on Suspicious Behavior Detection Enrique Bermejo, Oscar Déniz and Gloria Bueno [email protected] Universidad de Castilla-La Mancha, Spain. ABSTRACT In recent years, the demand for
REAL TIME TRAFFIC LIGHT CONTROL USING IMAGE PROCESSING
REAL TIME TRAFFIC LIGHT CONTROL USING IMAGE PROCESSING Ms.PALLAVI CHOUDEKAR Ajay Kumar Garg Engineering College, Department of electrical and electronics Ms.SAYANTI BANERJEE Ajay Kumar Garg Engineering
CMS-DH CENTRAL MANAGEMENT SOFTWARE
CMS-DH CENTRAL MANAGEMENT SOFTWARE CMS-DH is a central management software that allows you to view and manage up to 300 DH200 series DVRs. System Requirements Your system must meet the system requirements
802.3af. Build-in Speaker. PoE
FE-200DM 2-MegaPixel Ceiling Mount Fish Eye PoE IPCAM Panoramic 360 Degrees Full View H.264 & MJPEG Video Compression 2 Megapixels Resolution at 1600 x 1200 pixels Micro SD Card Slot for Local Storage
Real-Time Tracking of Pedestrians and Vehicles
Real-Time Tracking of Pedestrians and Vehicles N.T. Siebel and S.J. Maybank. Computational Vision Group Department of Computer Science The University of Reading Reading RG6 6AY, England Abstract We present
