A Bag of Features Approach to Ambient Fall Detection for Domestic Elder-care
Emmanouil Syngelakis, Centre for Vision Speech and Signal Processing, University of Surrey, Guildford, United Kingdom.
John Collomosse, Centre for Vision Speech and Signal Processing, University of Surrey, Guildford, United Kingdom.

Abstract: Falls in the home are a major source of injury for the elderly. The affordability of commodity video cameras is prompting the development of ambient intelligent environments to monitor the occurrence of falls in the home. This paper describes an automated fall detection system, capable of tracking movement and detecting falls in real-time. In particular we explore the application of the Bag of Features paradigm, frequently applied to general activity recognition in Computer Vision, to the domestic fall detection problem. We show that fall detection is feasible using such a framework, having evaluated our approach in both controlled test scenarios and domestic scenarios exhibiting uncontrolled fall direction and visually cluttered environments.

Keywords: Ambient domestic monitoring; fall detection; activity recognition; bag of features.

I. INTRODUCTION

Falls are one of the most common and dangerous accidents for older people, especially when living alone. It is estimated that about one third of people aged 65 and above endure a fall at some time each year, and 6 in 10 of these falls occur in the home [1]. These incidents contribute to physical injury and can introduce ongoing psychological problems. The increasing trend toward domestic care, driven by rising costs and an ageing population, motivates new ambient monitoring technologies to detect and alert carers to falls within the home. This paper reports an experiment into the application of the Bag of Features (BoF) framework, commonly used in Computer Vision for general activity recognition [2], [3], to the detection of falls in the home.
BoF frameworks regularly top the performance tables of international benchmarking competitions in video activity recognition (e.g. TRECVid, VideoOlympics). Such benchmark footage often exhibits visual clutter and moving cameras, yet BoF based approaches are able to identify complex actions such as drinking, kissing and running in movies. In this paper we apply the BoF paradigm to the problem of domestic fall detection, for which good performance in unconstrained and cluttered domestic environments is essential. Our passive visual monitoring system uses trained descriptors based on motion patterns rather than shape, and so can, for example, discriminate between video of people lying down and falling. Further, our system is robust to the direction of the fall relative to the camera, which can be problematic for ambient monitoring systems relying upon the outline of the body shape. We evaluate our system over two datasets, each comprising 100 video clips capturing, equally, examples of falls and other normal domestic activities undertaken by 3 individuals. We show mean average precision (MAP) of 90% for uncluttered outdoor visual environments, and similar levels of performance (89%) for cluttered indoor domestic visual environments.

II. RELATED WORK

The most common fall alert systems perform no detection and are based on manually operated panic buttons, worn on the wrist or around the neck. A number of passive monitoring technologies have also been developed, the majority of which are mechanically based sensors. An accelerometer based solution in the form of a wrist watch [4] was developed by CSEM (Centre Suisse d'Electronique et de Microtechnique). A similar sensing platform (the eWatch [5]) is composed of multiple sensors such as temperature, light, dual axis accelerometers and a microphone. A sensor fusion technique is used to classify user activity in real-time.
Orientation of the body is also considered in [6], [7]; these systems also consider the possibility of non-horizontal end positions, for example detecting falls against walls or furniture. These systems underline the importance of using motion information, rather than final body shape or position, in determining if a fall has occurred. Typically mechanical sensors yield best results when worn close to the centre of gravity [5]. An alternative form of mechanical sensor is built into a cane in [8], taking measurements of motion, force and pressure. Three stages of fall must be identified to trigger an alarm: a swift change from vertical to horizontal, the detection of force on ground impact, and a motionless cane. These stages are designed to avoid false positive detections when, for example, the cane is dropped.

The majority of vision based fall detection systems rely upon the extraction of a region (silhouette) representing the monitored subject. Analysis of this region's shape indicates a fall. One example of such a system was presented by Lee and Mihailidis [9], using a ceiling mounted camera, which could discriminate between five different action types.

Figure 1. Illustrating the Bag of Features pipeline used in our real-time fall detection system. During training, features are extracted from temporally local windows in the video and clustered via k-means to form a codebook. At run-time the codebook is used to bag features from the current time instant into codewords. The frequency histogram of codewords is classified to determine if the current temporal window contains a fall.

Wide-angle lens cameras [10] and omni-directional cameras [11] have also been explored for ambient monitoring. Spehr et al. [10] use a hybrid background subtraction technique to determine the person region invariant to clothes, lighting etc., and use body orientation to determine fall occurrence. Specifically, a change in orientation of over 28 degrees indicated a fall. A similar orientation based system was presented by Zhang et al. [12]. Such shape analysis demands accurate segmentation of the person, and could generate false alarms if a person lies down in the monitored area.

Three dimensional (3D) tracking solutions within the living space have also been explored for fall detection, using calibrated video cameras [13], [14] and time of flight cameras [15]. Rougier and Meunier [13] track the head of a subject using a particle filter and identify falls from the trajectory of the head. The main disadvantage of their lab based prototype was the necessity to manually bootstrap the system by indicating the location of the head. Jansen et al. [14] used 3D tracking to distill a set of features comprising head distance to floor, orientation of body and level of activity (motion) to determine the occurrence of a fall. Although 3D camera systems convey an accurate representation of the environment to the detection algorithm, they require multiple calibrated cameras to be installed within the living space, whereas our system consists of a single camera that can be relocated as desired, e.g. on a table or mantelpiece within the living room.

III. FALL DETECTION SYSTEM

We now describe the process for detecting falls in real-time. In contrast to typical fall detection systems that isolate the region of interest corresponding to the person, our system requires no such pre-segmentation of the scene. Rather, we detect stable points of interest anywhere in the scene and extract features from the motion of these points, recognising the characteristic motion of a fall. By removing the need for identifying the person in the scene, our system is able to operate in cluttered visual environments that may frustrate segmentation algorithms based on appearance or motion subtraction.

A. Codebook Extraction

Our system captures VGA (640×480) resolution video frames, and identifies points of interest within each video frame I_t at time t using the Kanade-Lucas tracker (KLT) across multiple (octave interval) spatial scales [16]. These points are tracked to the subsequent frame, yielding a set of n motion vectors F(t) = {f_1, f_2, ..., f_n} where f_i = (δx_i(t), δy_i(t)), the displacement of point i between frames; typically n = [20, 50], resulting in a variable number of features per frame. These features form the basis for detection of falls in our system, being used to learn and later recognise the pattern of motion vectors present during a fall. During training we capture several videos of simulated falls and other unrelated domestic actions, resulting in a large set of motion vectors from each frame of each video. Following the standard BoF hard-assignment method, these features are bagged into codewords using unsupervised k-means clustering to form k = 100 groups. The cluster centres in the feature space, C = {c_1, c_2, ..., c_k}, are used to assign each feature in the training data to a codeword in {1..k}. We compute a frequency histogram H(t) = {h_1, h_2, ..., h_k} for each training video frame by counting the occurrence of codewords present within that frame.
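The codebook construction and per-frame bagging described above can be sketched as follows. This is a minimal NumPy illustration rather than the authors' implementation: the simple Lloyd's k-means and the function names are our own, and in the real system the per-frame motion vectors would come from a KLT tracker rather than being passed in directly.

```python
import numpy as np

def build_codebook(features, k=100, iters=20, seed=0):
    """Cluster training motion vectors (n_samples, 2) into k codeword
    centres via a plain Lloyd's k-means (illustrative, not optimised)."""
    rng = np.random.default_rng(seed)
    centres = features[rng.choice(len(features), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # assign every feature to its nearest centre (hard assignment)
        dists = np.linalg.norm(features[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centre to the mean of its assigned features
        for j in range(k):
            members = features[labels == j]
            if len(members):
                centres[j] = members.mean(axis=0)
    return centres

def frame_histogram(frame_features, centres):
    """Bag one frame's motion vectors into the codeword histogram H(t),
    normalised because the feature count varies from frame to frame."""
    dists = np.linalg.norm(frame_features[:, None, :] - centres[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    h = np.bincount(labels, minlength=len(centres)).astype(float)
    return h / h.sum()
```

At run-time the same `frame_histogram` step is applied to the live feature set, so training and query histograms live in the same k-dimensional space.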
Due to the varying number of features in each frame, H(t) is normalised to enable the comparison of histograms between frames.

Figure 2. Evaluation dataset: examples of falls present within our two 100 video datasets captured in cluttered and uncluttered conditions. Red circles indicate features; green lines indicate the feature trajectories.

B. Fall Classification

The training video clips are separated into positive (falls present) and negative (falls absent) categories, and sets of histograms H+ and H− computed using codebook C. Collectively these histograms are points {p_1, p_2, ..., p_m} ∈ R^k, from which we compute a mean (μ) and covariance (C):

    μ = (1/m) Σ_{i=1}^{m} p_i    (1)
    C = (1/m) Σ_{i=1}^{m} (p_i − μ)(p_i − μ)^T    (2)

from which we compute the eigenvectors {u_1, u_2, ..., u_k} and eigenvalues {λ_1, λ_2, ..., λ_k} of C:

    C = U V U^T, where U = [u_1 u_2 ... u_k] and V = diag(λ_1, ..., λ_k).    (3)

We compute d such that Σ_{i=1}^{d} λ_i ≥ 0.95 Σ_{i=1}^{k} λ_i, creating a projection space P = [u_1 u_2 ... u_d] from which we compute a reduced dimension representation Ĥ retaining 95% of the variance in the training set:

    Ĥ+ = P^T H+,    Ĥ− = P^T H−    (4)

From the d dimensional training examples {Ĥ+, Ĥ−} the respective means {μ+, μ−} and covariances {C+, C−} are computed, comprising Eigenmodels for the positive and negative fall detection cases. At run-time, each video frame is classified in real-time as a positive or negative fall detection case by comparison with these Eigenmodels. First, motion features are extracted and the codebook applied to create a query histogram q ∈ R^k, which is subsequently projected into the d dimensional classification space:

    q̂ = P^T q    (5)

The Mahalanobis distance to each Eigenmodel is computed to determine whether a fall has taken place. A fall has occurred if:

    (q̂ − μ+) C+^{-1} (q̂ − μ+)^T < (q̂ − μ−) C−^{-1} (q̂ − μ−)^T    (6)

We low-pass filter this per-frame decision on fall presence by counting the occurrence of fall-positive decisions within a short temporal window (10 frames). If more than half of these frames are deemed to contain falls then the alarm is triggered.

IV. EXPERIMENTAL RESULTS

We evaluated our system over two datasets, each containing 100 video clips split evenly between fall and non-fall scenarios. The datasets were created using three participants. Non-fall scenarios included a random spread of activities likely to occur in domestic scenarios, such as sitting and reclining on couches and ambulation within the visual field of the camera. In the case of the cluttered dataset, footage was captured in a living room with typical visual clutter. To enable participants to realistically simulate falling, cushions were placed on the floor of the capture area. In the uncluttered dataset, fall direction was lateral with respect to the camera. In the cluttered dataset, fall direction was unconstrained and occurred in all directions: left, right, towards and away from the camera.

The performance of the system is summarised in the confusion matrix of Table I. For the cluttered and uncluttered scenarios the mean average precision (MAP) was 89% and 90% respectively, with a split of 20:80 between training and classification test data. The 20% of the dataset used for training was selected at random. The optimised implementation of the KLT tracker in the OpenCV library, and the relatively few multiplications and additions required to compute (6), enable processing of video in real-time at a sustained rate of 20 frames per second on a Pentium 4 2.2 GHz laptop with 2 GB RAM. In our experiments video was captured using the low cost embedded web camera of the laptop.
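The classification stage of Section III-B (Eqs. 1-6) and the 10-frame temporal filter can be sketched as follows. This is a hedged NumPy illustration under our own class and function names, not the authors' code; a pseudo-inverse stands in for C^{-1} so that near-degenerate covariances do not break the sketch.

```python
import numpy as np
from collections import deque

def fit_projection(H, var_keep=0.95):
    """PCA basis P = [u_1 ... u_d] retaining var_keep of the variance
    of the training histograms H (m, k), per Eqs. (1)-(4)."""
    C = np.cov(H - H.mean(axis=0), rowvar=False)
    lam, U = np.linalg.eigh(C)              # ascending order
    lam, U = lam[::-1], U[:, ::-1]          # sort descending
    d = int(np.searchsorted(np.cumsum(lam) / lam.sum(), var_keep)) + 1
    return U[:, :d]

class Eigenmodel:
    """Mean and inverse covariance of projected histograms (m, d)."""
    def __init__(self, X):
        self.mu = X.mean(axis=0)
        self.Cinv = np.linalg.pinv(np.atleast_2d(np.cov(X, rowvar=False)))
    def mahalanobis2(self, x):
        r = x - self.mu
        return float(r @ self.Cinv @ r)

class FallDetector:
    """Per-frame decision (Eq. 6), low-pass filtered over a 10-frame
    window: the alarm fires when a majority of recent frames vote fall."""
    def __init__(self, P, pos_model, neg_model, window=10):
        self.P, self.pos, self.neg = P, pos_model, neg_model
        self.votes = deque(maxlen=window)
    def update(self, q):                    # q: raw k-dim histogram
        qhat = self.P.T @ q                 # Eq. (5)
        self.votes.append(self.pos.mahalanobis2(qhat)
                          < self.neg.mahalanobis2(qhat))
        return sum(self.votes) > len(self.votes) / 2
```

Training would call `fit_projection` on the stacked positive and negative histograms, build one `Eigenmodel` from `H_pos @ P` and one from `H_neg @ P` (equivalent to projecting each histogram by P^T), then feed each live frame's histogram to `FallDetector.update`.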
V. DISCUSSION AND CONCLUSION

We have demonstrated that a Bag of Features (BoF) framework can be effectively applied to the problem of fall detection in real-time, even in the presence of visual clutter and unconstrained fall direction. In contrast to visual detectors based on segmenting and analysing the body contour, our system is based on the learned motion signatures of falls. This enables it to be robust to visual clutter, and to distinguish between individuals lying down voluntarily in a controlled manner and falling quickly. We were surprised at the sustained strong performance of the system when switching from the uncluttered to the cluttered dataset, despite the simplicity of our features (comprising motion flow only). Future work will explore the possibility of including appearance information in the feature vector, as this has recently been shown [3] to enhance the performance of more general BoF based activity recognition.

Table I. Precision of the system on the cluttered (left) and uncluttered (right) datasets (80 videos per dataset). [Confusion matrices of detection results (+ve/-ve) against ground truth; numeric entries not preserved in this transcription.]

A further refinement could also be in the deployment of our system. Currently we employ a laptop with an integrated web camera, running our software in real-time. In future development we could leverage small form factor devices specifically built to co-locate within the shared living space, much as modern carbon monoxide and smoke alarms do. However we do not believe such enhancements are necessary to further demonstrate the potential of deploying BoF for real-time fall detection in low-cost ambient monitoring solutions.

REFERENCES

[1] M. Sartini, M. Cristina, A. Spagnolo, P. Cremonisi, C. Costaguta, F. Monacelli, J. Garau, and P. Odetti, "The epidemiology of domestic injurious falls in a community dwelling elderly population: an outgrowing economic burden," European Journal of Public Health.
[2] C. Schuldt, I. Laptev, and B. Caputo, "Recognizing human actions: a local SVM approach," in Proc. Intl. Conf. on Pattern Recognition (ICPR), vol. 3, 2004.
[3] K. Mikolajczyk and H. Uemura, "Action recognition with appearance motion features and fast search trees," Computer Vision and Image Understanding (CVIU).
[4] Fall detection system (Centre Suisse d'Electronique et de Microtechnique), 2010. Last accessed November.
[5] J. Chen, K. Kwong, D. Chang, J. Luk, and R. Bajcsy, "Wearable sensors for reliable fall detection," in Proc. IEEE Engineering in Medicine and Biology.
[6] U. Maurer, A. Smailagic, D. Siewiorek, and M. Deisher, "Activity recognition and monitoring using multiple sensors on different body positions," in Proc. Intl. Workshop on Wearable and Implantable Body Sensor Networks (BodyNet).
[7] Lifeline with AutoAlert (Philips Electronics), 2010, press/2010/lifeline.wpd. Last accessed November.
[8] M. Lan, A. Nahapetian, A. Vahdatpour, L. Au, M. Sarrafzadeh, and W. Kaiser, "SmartFall: an automatic fall detection system based on subsequence matching for the SmartCane," in Proc. Intl. Workshop on Wearable and Implantable Body Sensor Networks (BodyNet), Apr.
[9] T. Lee and A. Mihailidis, "An intelligent emergency response system: preliminary development and testing of automated fall detection," Journal of Telemedicine and Telecare, vol. 11.
[10] J. Spehr, M. Govercin, S. Winkelbach, E. Steinhagen-Thiessen, and F. Wahl, "Visual fall detection in home environments," in Proc. Intl. Conf. of the Intl. Society of Gerontology, vol. 7, no. 2, Jun. 2008.
[11] Y. C. Huang, S. G. Miaou, and T. Y. Liao, "A human fall detection system using an omni-directional camera in practical environments for health care applications," in Proc. IAPR Conf. on Machine Vision Applications.
[12] Z. Zhang, E. Becker, R. Arora, and V. Athitsos, "Experiments with computer vision methods for fall detection," in Proc. Intl. Conf. on Pervasive Technologies Related to Assistive Environments.
[13] C. Rougier and J. Meunier, "Fall detection using 3D head trajectory extracted from a single camera video sequence," in Proc. Intl. Workshop on Video Processing for Security (V4PS-06), Quebec, Canada.
[14] B. Jansen and R. Deklerck, "Home monitoring of elderly people with 3D camera technology," in Proc. BENELUX Biomedical Engineering Symp.
[15] G. Diraco, A. Leone, and P. Siciliano, "An active vision system for fall detection and posture recognition in elderly healthcare," in Proc. Conf. on Design, Automation and Test in Europe, Mar. 2010.
[16] B. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," in Proc. Intl. Joint Conf. on Artificial Intelligence (IJCAI), 1981.
Honeywell Video Analytics
INTELLIGENT VIDEO ANALYTICS Honeywell s suite of video analytics products enables enhanced security and surveillance solutions by automatically monitoring video for specific people, vehicles, objects and
A Real-Time Fall Detection System in Elderly Care Using Mobile Robot and Kinect Sensor
International Journal of Materials, Mechanics and Manufacturing, Vol. 2, No. 2, May 201 A Real-Time Fall Detection System in Elderly Care Using Mobile Robot and Kinect Sensor Zaid A. Mundher and Jiaofei
The Big Data methodology in computer vision systems
The Big Data methodology in computer vision systems Popov S.B. Samara State Aerospace University, Image Processing Systems Institute, Russian Academy of Sciences Abstract. I consider the advantages of
Fall Detection System using Accelerometer Principals with Arduino Development Board
ISSN: 2321-7782 (Online) Volume 3, Issue 9, September 2015 International Journal of Advance Research in Computer Science and Management Studies Research Article / Survey Paper / Case Study Available online
A Genetic Algorithm-Evolved 3D Point Cloud Descriptor
A Genetic Algorithm-Evolved 3D Point Cloud Descriptor Dominik Wȩgrzyn and Luís A. Alexandre IT - Instituto de Telecomunicações Dept. of Computer Science, Univ. Beira Interior, 6200-001 Covilhã, Portugal
Using Received Signal Strength Variation for Surveillance In Residential Areas
Using Received Signal Strength Variation for Surveillance In Residential Areas Sajid Hussain, Richard Peters, and Daniel L. Silver Jodrey School of Computer Science, Acadia University, Wolfville, Canada.
A DECISION TREE BASED PEDOMETER AND ITS IMPLEMENTATION ON THE ANDROID PLATFORM
A DECISION TREE BASED PEDOMETER AND ITS IMPLEMENTATION ON THE ANDROID PLATFORM ABSTRACT Juanying Lin, Leanne Chan and Hong Yan Department of Electronic Engineering, City University of Hong Kong, Hong Kong,
Using Data Mining for Mobile Communication Clustering and Characterization
Using Data Mining for Mobile Communication Clustering and Characterization A. Bascacov *, C. Cernazanu ** and M. Marcu ** * Lasting Software, Timisoara, Romania ** Politehnica University of Timisoara/Computer
Visual-based ID Verification by Signature Tracking
Visual-based ID Verification by Signature Tracking Mario E. Munich and Pietro Perona California Institute of Technology www.vision.caltech.edu/mariomu Outline Biometric ID Visual Signature Acquisition
A Reliability Point and Kalman Filter-based Vehicle Tracking Technique
A Reliability Point and Kalman Filter-based Vehicle Tracing Technique Soo Siang Teoh and Thomas Bräunl Abstract This paper introduces a technique for tracing the movement of vehicles in consecutive video
Vehicle Tracking System Robust to Changes in Environmental Conditions
INORMATION & COMMUNICATIONS Vehicle Tracking System Robust to Changes in Environmental Conditions Yasuo OGIUCHI*, Masakatsu HIGASHIKUBO, Kenji NISHIDA and Takio KURITA Driving Safety Support Systems (DSSS)
Low-resolution Character Recognition by Video-based Super-resolution
2009 10th International Conference on Document Analysis and Recognition Low-resolution Character Recognition by Video-based Super-resolution Ataru Ohkura 1, Daisuke Deguchi 1, Tomokazu Takahashi 2, Ichiro
A smartphone based real-time daily activity monitoring system. Shumei Zhang Paul McCullagh Jing Zhang Tiezhong Yu
A smartphone based real-time daily activity monitoring system Shumei Zhang Paul McCullagh Jing Zhang Tiezhong Yu Outline Contribution Background Methodology Experiments Contribution This paper proposes
15.062 Data Mining: Algorithms and Applications Matrix Math Review
.6 Data Mining: Algorithms and Applications Matrix Math Review The purpose of this document is to give a brief review of selected linear algebra concepts that will be useful for the course and to develop
Humans fall detection through visual cues
Technical University of Crete Department of Production Engineering and Management Humans fall detection through visual cues Master Thesis Konstantinos Makantasis Supervisor: Assistant Professor Anastasios
PHYSIOLOGICALLY-BASED DETECTION OF COMPUTER GENERATED FACES IN VIDEO
PHYSIOLOGICALLY-BASED DETECTION OF COMPUTER GENERATED FACES IN VIDEO V. Conotter, E. Bodnari, G. Boato H. Farid Department of Information Engineering and Computer Science University of Trento, Trento (ITALY)
Is a Data Scientist the New Quant? Stuart Kozola MathWorks
Is a Data Scientist the New Quant? Stuart Kozola MathWorks 2015 The MathWorks, Inc. 1 Facts or information used usually to calculate, analyze, or plan something Information that is produced or stored by
IMPLICIT SHAPE MODELS FOR OBJECT DETECTION IN 3D POINT CLOUDS
IMPLICIT SHAPE MODELS FOR OBJECT DETECTION IN 3D POINT CLOUDS Alexander Velizhev 1 (presenter) Roman Shapovalov 2 Konrad Schindler 3 1 Hexagon Technology Center, Heerbrugg, Switzerland 2 Graphics & Media
Real Time Target Tracking with Pan Tilt Zoom Camera
2009 Digital Image Computing: Techniques and Applications Real Time Target Tracking with Pan Tilt Zoom Camera Pankaj Kumar, Anthony Dick School of Computer Science The University of Adelaide Adelaide,
A Cognitive Approach to Vision for a Mobile Robot
A Cognitive Approach to Vision for a Mobile Robot D. Paul Benjamin Christopher Funk Pace University, 1 Pace Plaza, New York, New York 10038, 212-346-1012 [email protected] Damian Lyons Fordham University,
Virtual Environments - Basics -
Virtual Environments - Basics - What Is Virtual Reality? A Web-Based Introduction Version 4 Draft 1, September, 1998 Jerry Isdale http://www.isdale.com/jerry/vr/whatisvr.html Virtual Environments allow
Multi-view Intelligent Vehicle Surveillance System
Multi-view Intelligent Vehicle Surveillance System S. Denman, C. Fookes, J. Cook, C. Davoren, A. Mamic, G. Farquharson, D. Chen, B. Chen and S. Sridharan Image and Video Research Laboratory Queensland
Crime Hotspots Analysis in South Korea: A User-Oriented Approach
, pp.81-85 http://dx.doi.org/10.14257/astl.2014.52.14 Crime Hotspots Analysis in South Korea: A User-Oriented Approach Aziz Nasridinov 1 and Young-Ho Park 2 * 1 School of Computer Engineering, Dongguk
Detection and Recognition of Mixed Traffic for Driver Assistance System
Detection and Recognition of Mixed Traffic for Driver Assistance System Pradnya Meshram 1, Prof. S.S. Wankhede 2 1 Scholar, Department of Electronics Engineering, G.H.Raisoni College of Engineering, Digdoh
