Fall detection in the elderly by head tracking
Loughborough University Institutional Repository

Fall detection in the elderly by head tracking

This item was submitted to Loughborough University's Institutional Repository by the/an author.

Citation: YU, M., NAQVI, S.M. and CHAMBERS, J.A., 2009. Fall detection in the elderly by head tracking. IN: IEEE Workshop on Statistical Signal Processing (SSP 2009), Cardiff, Wales, 31 August-3 September, pp.

Additional Information: This is a conference paper [© IEEE]. It is also available at: Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

Metadata Record:
Version: Published
Publisher: © IEEE

Please cite the published version.
This item was submitted to Loughborough's Institutional Repository by the author and is made available under the following Creative Commons Licence conditions. For the full text of this licence, please go to:
FALL DETECTION IN THE ELDERLY BY HEAD TRACKING

Miao Yu, Syed Mohsen Naqvi and Jonathon Chambers
Advanced Signal Processing Group, Electronic and Electrical Engineering Department, Loughborough University, Loughborough, Leicestershire, UK
{elmy, s.m.r.naqvi,

ABSTRACT

In this paper, we propose a fall detection method based on head tracking within a smart home environment equipped with video cameras. A motion history image and code-book background subtraction are combined to determine whether large movement occurs within the scene. Based on the magnitude of the movement information, particle filters with different state models are used to track the head. The head tracking procedure is performed in two video streams taken by two separate cameras, and the three-dimensional head position is calculated from the tracking results. Finally, the three-dimensional horizontal and vertical velocities of the head are used to detect the occurrence of a fall. The success of the method is confirmed on real video sequences.

Index Terms — motion history image, code-book background subtraction, particle filtering, head tracking, fall detection

1. INTRODUCTION

In recent years, the topic of caring for elderly people has gained increasing public concern. Among the unexpected events that happen to the elderly, falls are the leading cause of death due to injury, and 87 percent of all fractures are caused by falls [1]. Traditional methods to detect falls use wearable sensors. However, the problem with such detectors is that older people often forget to wear them, and they are intrusive. To solve this problem, fall detection based on digital video processing has been developed.

There are many related works in the field of fall detection based on digital video processing techniques [2][3][4]. By placing a camera in the ceiling, Lee and Mihailidis [2] detect a fall using the shape of the person's silhouette, and Nait-Charif and McKenna [3] detect inactivity outside the normal zones of inactivity such as chairs or sofas. Hazelhoff et al. [4] adopted a system with two uncalibrated cameras, in which a Gaussian multi-frame classifier recognizes fall events using two features: the direction of the main axis of the body and the ratio of the variances of the motion in the x and y directions.

In this paper, we propose a fall detection method based on the three-dimensional head velocities in both the horizontal and vertical directions. Unlike the above methods, this method can detect a fall event while it is happening, so an alarm signal can be sent immediately.

The structure of this paper is as follows. In Section 2, we introduce the concepts of the motion history image (MHI) and the code-book background subtraction method, which are then combined to calculate a parameter, C_motion, used to determine whether the movement is large or small. In Section 3, a particle filter head tracking method based on gradient and colour information is proposed to track the head in two video streams recorded by the two cameras. The two-dimensional head tracking results are converted to three-dimensional head positions to obtain the head's horizontal and vertical velocities. Experimental results are presented in Section 4, and conclusions and future work are given in Section 5.

2. MOVEMENT DETERMINATION

Before head tracking, we must determine whether the movement is large or small so that a proper state model can be used for particle filter head tracking.
In this paper, we determine the type of movement by a parameter C_motion, which is calculated from the motion history image and the result of background subtraction.

2.1. Motion History Image

The MHI, first introduced by Bobick and Davis [5], is an image in which the pixel intensity represents the recency of motion in an image sequence. The motion history image shows the tendency of a person's movement through the magnitudes of its pixels, so it is commonly used for activity recognition. The MHI is defined as:

H_\tau(x, y, t) = \begin{cases} \tau & \text{if } D(x, y, t) = 1 \\ \max(0,\, H_\tau(x, y, t-1) - 1) & \text{otherwise} \end{cases}    (1)

where D(x, y, t) represents the pixel at position (x, y) of a binary image D at time t, with 1s marking the motion regions; it is extracted from the original image sequence I(x, y, t) using an image-differencing method [5]. The resulting H_\tau(x, y, t) is a scalar-valued image in which more recently moving pixels are brighter, so when large movement occurs, more bright pixels appear in the area of the person.
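As a minimal illustrative sketch (not the authors' implementation), the MHI update in Eq. (1) can be written in Python/NumPy as below. The helper names update_mhi and mhi_from_masks are our own, the binary mask D is assumed to come from frame differencing, and tau is set to 255 so that the MHI values lie in [0, 255], consistent with the normalisation used later for C_motion.

```python
import numpy as np

def update_mhi(H, D, tau=255):
    """One MHI update step following Eq. (1).

    H   : float array, previous motion history image H_tau(x, y, t-1)
    D   : binary array (0/1), motion mask at time t from frame differencing
    tau : duration parameter; recently moving pixels are reset to tau
    """
    return np.where(D == 1, float(tau), np.maximum(0.0, H - 1.0))

def mhi_from_masks(masks, tau=255):
    """Accumulate an MHI over a sequence of binary motion masks."""
    H = np.zeros_like(masks[0], dtype=float)
    for D in masks:
        H = update_mhi(H, D, tau)
    return H
```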
2.2. Background Subtraction

To remove unwanted objects within the video scene, we use the code-book background subtraction method proposed in [6]. Compared with other background subtraction methods, it gives good results on image sequences with shadows, highlights and high image compression, and it can cope efficiently with a non-stationary background. Initially, a code-book is trained as the background model; the code-book training procedure is described in detail in [6]. After obtaining the code-book, we carry out the background subtraction step for every pixel P(x,y) as follows:

Step I. For each pixel P(x,y) = (R,G,B), calculate the intensity from the (R,G,B) value of the colour image by I = \sqrt{R^2 + G^2 + B^2}.

Step II. Find the first codeword c_m in the code-book at position (x,y) matching P(x,y) based on two conditions:
1. colordist(P(x,y), c_m) \le \epsilon_2
2. brightness(I, \langle \hat{I}_m, \check{I}_m \rangle) = true
Update the matched codeword.

Step III. If there is no match, the pixel P(x,y) is categorized as foreground; otherwise, it is regarded as a background pixel.

Here colordist(x, c_m) represents the colour distortion between x and c_m, and brightness(I, \langle \hat{I}_m, \check{I}_m \rangle) is true if I is in the range [\hat{I}_m, \check{I}_m].

2.3. Movement Classification

After obtaining the MHI and the background subtraction result, we determine whether the movement is large or small by calculating the parameter C_motion, defined as [7]:

C_{motion} = \frac{\sum_{pixel(x,y) \in blob} H_\tau(x, y, t)}{\#pixels(blob)}    (2)

where blob represents the region of the person extracted by the code-book background subtraction and H_\tau(x, y, t) is the MHI. The denominator \#pixels(blob) is obtained by multiplying the number of pixels in the extracted person's region by 255. C_motion = 0 represents no motion and C_motion = 1 represents full motion. We assume that if C_motion > 1/2, then large movement occurs.
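The per-pixel codebook test and the C_motion computation in Eq. (2) can be sketched as follows. This is an illustration under our own assumptions rather than the authors' code: the codebook is assumed to be already trained as in [6], each codeword is represented here as a dictionary holding an RGB vector and brightness bounds, the colour distortion follows the projection-based form used in [6], and the helper names (colordist, is_background, c_motion) are hypothetical.

```python
import numpy as np

def colordist(pixel, codeword_rgb):
    """Projection-based colour distortion between a pixel and a codeword (form used in [6])."""
    x = np.asarray(pixel, dtype=float)
    v = np.asarray(codeword_rgb, dtype=float)
    proj_sq = np.dot(x, v) ** 2 / max(np.dot(v, v), 1e-9)   # squared projection onto the codeword axis
    return np.sqrt(max(np.dot(x, x) - proj_sq, 0.0))

def is_background(pixel, codewords, eps):
    """Per-pixel codebook test: background if some codeword matches in colour and brightness."""
    I = np.sqrt(float(np.dot(pixel, pixel)))                 # I = sqrt(R^2 + G^2 + B^2)
    for cw in codewords:                                     # cw: {'rgb', 'I_low', 'I_high'}
        if colordist(pixel, cw['rgb']) <= eps and cw['I_low'] <= I <= cw['I_high']:
            return True                                      # matched codeword -> background
    return False                                             # no match -> foreground

def c_motion(mhi, blob_mask):
    """Eq. (2): MHI energy inside the person's blob, assuming the MHI is scaled to [0, 255]."""
    n = int(blob_mask.sum())
    if n == 0:
        return 0.0
    return float(mhi[blob_mask].sum()) / (n * 255.0)

# Decision used in the paper: large movement if c_motion(...) > 0.5.
```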
3. HEAD TRACKING

For head tracking, we use a conventional particle filter based on the condensation algorithm. The state model of the particle filter is chosen according to the value of C_motion, while the measurement model is based on gradient and colour information.

Particle filtering is a widely used tool for tracking. A particle filter consists of two models, the state model and the measurement model, which can be represented as:

s(t) = f(s(t-1)) + n(t)
z(t) = h(s(t)) + v(t)

where s(t) is the state vector at discrete time t and z(t) is the measurement vector. Assume that at time t-1 we have N particles \{s^i_{t-1}, i = 1:N\} and the corresponding normalized weights \{w^i_{t-1}, i = 1:N\}, such that \sum_{i=1}^{N} w^i_{t-1} s^i_{t-1} is the minimum mean square error (MMSE) estimate of the state s(t-1). To obtain the MMSE estimate at time t, the condensation algorithm is applied. The procedure of the condensation algorithm is as follows [8].

For \{s^i_{t-1}, i = 1:N\} and \{w^i_{t-1}, i = 1:N\}:
I. Obtain new samples s^i_t from the proposal distribution p(s_t | s^i_{t-1}) for every i.
II. Calculate the weights w^i_t from w^i_t = w^i_{t-1} p(z_t | s^i_t).
III. Normalize the w^i_t so that \sum_{i=1}^{N} w^i_t = 1.

Finally, we calculate the MMSE estimate \hat{s}(t) as \hat{s}(t) = \sum_{i=1}^{N} w^i_t s^i_t. To avoid the degeneracy problem [8], we re-sample the particles after every iteration so that all particle weights become identical.

In our work, the state variable s(t) has the form [x_t, y_t, l_t], corresponding to an ellipse, where x_t and y_t are the coordinates of the ellipse centre and l_t is the shorter axis (we assume the ratio between the long and short axes of an ellipse representing a head is 1.2). The state model adopted in this paper is a first-order random walk with additive Gaussian noise, of the form s(t) = s(t-1) + n(t); the variance of the added noise is set according to the value of C_motion.

The measurement model is based on two cues, gradient and colour. Assuming the two observations are independent, the following holds:

p(z^i_t | s^i_t) = p(z^i_{t,gradient} | s^i_t)\, p(z^i_{t,colour} | s^i_t)    (3)

The likelihoods p(z^i_{t,gradient} | s^i_t) and p(z^i_{t,colour} | s^i_t) are given by:

p(z^i_{t,gradient} | s^i_t) = \frac{1}{\sqrt{2\pi}\,\sigma_g} e^{-\frac{(1 - \psi_g(s^i_t))^2}{2\sigma_g^2}}    (4)

and

p(z^i_{t,colour} | s^i_t) = \frac{1}{\sqrt{2\pi}\,\sigma_c} e^{-\frac{(1 - \rho[p_{s^i_t}, q])^2}{2\sigma_c^2}}    (5)

which are described in detail in [9].
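For illustration only (not the authors' code), one condensation iteration for the ellipse state [x, y, l] with the random-walk state model can be sketched as below. The function name condensation_step, the callable likelihood (standing in for the combined gradient and colour cues of Eqs. (3)-(5)) and the parameter noise_std are our own assumptions.

```python
import numpy as np

def condensation_step(particles, weights, likelihood, noise_std):
    """One condensation iteration for the ellipse state s = [x, y, l].

    particles  : (N, 3) array of particle states at time t-1
    weights    : (N,) normalised weights at time t-1
    likelihood : callable returning p(z_t | s) for one state s (gradient x colour cue)
    noise_std  : (3,) std of the random-walk noise, chosen according to C_motion
    """
    N = len(particles)

    # Resample to avoid degeneracy; all resampled particles get identical weight.
    idx = np.random.choice(N, size=N, p=weights)
    particles = particles[idx]

    # Propagate with the first-order random-walk state model s(t) = s(t-1) + n(t).
    particles = particles + np.random.randn(N, 3) * noise_std

    # Weight by the measurement likelihood and normalise.
    w = np.array([likelihood(s) for s in particles])
    w = w / max(w.sum(), 1e-12)

    # MMSE estimate of the head ellipse at time t.
    estimate = (w[:, None] * particles).sum(axis=0)
    return particles, w, estimate
```

In this sketch, noise_std would be set larger when C_motion indicates large movement and smaller otherwise, which is how the state model is switched in the paper.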
4. EXPERIMENTS AND EVALUATIONS

Figures 1 to 4 show the MHI and background subtraction results for four situations (fast walking, slow walking, sitting down and a fall), together with the corresponding values of C_motion. A large C_motion indicates large movement and vice versa.

Fig. 1. MHI and background subtraction result for the fast walking case, C_motion = 63.55%
Fig. 2. MHI and background subtraction result for the slow walking case, C_motion = 26.33%
Fig. 3. MHI and background subtraction result for the sitting down case, C_motion = 46.20%
Fig. 4. MHI and background subtraction result for a fall, C_motion = 61.24%

Figure 5 and Figure 6 show some tracking results for the video streams taken by the two cameras (frames 1, 200, 400 and 600 from each camera).

Fig. 5. Some tracking results for camera 1
Fig. 6. Some tracking results for camera 2

Based on the tracking results obtained from the two video streams, we obtain the head's three-dimensional positions and then calculate the head's horizontal and vertical velocities. Figure 7 shows the variation of the two velocities over time for the four different activities during a 25 s sequence. For detecting falls, we set two alert thresholds for the head's horizontal and vertical velocities respectively (0.4 m/s and 0.3 m/s). If both velocities exceed their alert thresholds, we assume a fall event has occurred.

Fig. 7. Variation of velocities for fast walking, slow walking, sitting down/standing up and a fall over 25 s

During the period 0 s-5 s, the person walks very fast, so the horizontal velocity is high while the vertical velocity is low. From 5 s-10 s, the person walks slowly and both velocities are low. During 10 s-20 s, the person is sitting down and standing up; large horizontal or vertical velocities may be found, but at least one of them remains below the corresponding alert threshold. At 20 s, the fall occurs and both velocities exceed their alert thresholds.
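The final decision rule is simple enough to express directly. The sketch below is our own illustration, not the authors' code: it assumes a sequence of three-dimensional head positions in metres with z as the vertical axis (a coordinate convention we impose), estimates horizontal and vertical speeds by finite differences, and flags a fall when both exceed the thresholds quoted above (0.4 m/s horizontal, 0.3 m/s vertical). The helper names head_velocities and detect_fall are hypothetical.

```python
import numpy as np

def head_velocities(positions, fps):
    """Horizontal and vertical speeds (m/s) from 3-D head positions.

    positions : (T, 3) array of head positions [x, y, z] in metres (z assumed vertical)
    fps       : camera frame rate
    """
    diffs = np.diff(positions, axis=0) * fps            # per-second displacement between frames
    v_horizontal = np.linalg.norm(diffs[:, :2], axis=1) # speed in the horizontal (x, y) plane
    v_vertical = np.abs(diffs[:, 2])                    # speed along the vertical axis
    return v_horizontal, v_vertical

def detect_fall(positions, fps, thr_h=0.4, thr_v=0.3):
    """Flag a fall when both horizontal and vertical speeds exceed their alert thresholds."""
    v_h, v_v = head_velocities(positions, fps)
    return bool(np.any((v_h > thr_h) & (v_v > thr_v)))
```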
5. CONCLUSION AND FUTURE WORK

In this paper, we propose a new fall detection system based on head tracking in two video streams recorded by two cameras. The head's three-dimensional vertical and horizontal velocities are used as the criteria for detecting a fall event. A particle filter based on gradient and colour information is used for head tracking, and its state model is determined by C_motion, a parameter calculated from the MHI and the background subtraction results.

A more robust fall detection system could be achieved by combining audio and video information, which is known as multimodal processing [1]. Blind source separation techniques could be applied to extract the person's voice, such as the word "help", from the noisy environment, and a speech recognition system could then be used to analyze the extracted voice to make a decision on whether a fall may have occurred.

6. REFERENCES

[1] A. P. Lopez, "Multimodal source separation for cognitive assistive technology," Erasmus Exchange Program Report, Loughborough University.

[2] T. Lee and A. Mihailidis, "An intelligent emergency response system: preliminary development and testing of automated fall detection," Journal of Telemedicine and Telecare, vol. 11, pp.

[3] H. Nait-Charif and S. McKenna, "Activity summarisation and fall detection in a supportive home environment," in Proceedings of the 17th International Conference on Pattern Recognition (ICPR), vol. 4, pp.

[4] L. Hazelhoff, J. Han, and P. H. N. de With, "Video-based fall detection in the home using principal component analysis," in Proceedings of the 10th International Conference on Advanced Concepts for Intelligent Vision Systems, vol. 11, pp.

[5] A. Bobick and J. Davis, "The recognition of human movement using temporal templates," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, pp.

[6] K. Kim, T. Chalidabhongse, D. Harwood, and L. Davis, "Real-time foreground-background segmentation using code-book model," Real-Time Imaging, vol. 11, June, pp.

[7] C. Rougier, J. Meunier, A. St-Arnaud, and J. Rousseau, "Fall detection from human shape and motion history using video surveillance," in Proceedings of the 21st International Conference on Advanced Information Networking and Applications Workshops, vol. 2, pp.

[8] B. Ristic, S. Arulampalam, and N. Gordon, Beyond the Kalman Filter: Particle Filters for Tracking Applications, Artech House, 2004.

[9] X. Xu and B. Li, "Head tracking using particle filter with intensity gradient and colour histogram," in IEEE International Conference on Multimedia and Expo (ICME), pp.