A Real-Time Fall Detection System in Elderly Care Using Mobile Robot and Kinect Sensor
International Journal of Materials, Mechanics and Manufacturing, Vol. 2, No. 2, May 2014

Zaid A. Mundher and Jiaofei Zhong

Abstract: The growing population of elderly people, especially in developed countries, motivates researchers to develop healthcare systems that ensure the safety of elderly people at home. Mobile robots can provide an efficient solution to this problem, and combining them with new sensing technologies such as the Kinect opens new ways to build intelligent systems that monitor elderly people and raise an alarm when dangerous events, such as falls, are detected. Falls and their consequences are a major risk, especially for elderly people who live alone and need immediate assistance. In this work, the Kinect sensor is used to build a mobile robot system that follows a person and detects when the target person has fallen. In addition, the mobile robot carries a cell phone that is used to send an SMS notification and to make an emergency call when a fall is detected.

Index Terms: Fall detection, healthcare, Kinect sensor, person-following.

I. INTRODUCTION

Falls and their consequences, such as bone fractures, are a major risk especially for elderly people who live alone, where immediate assistance is needed. According to [1], falls are the sixth most common cause of death for people over the age of 65; they are the second most common cause of death for people between the ages of 65 and 75, and the most common cause for people over 75. Therefore, automatically detecting falls at home has become a major research interest. Fall detection systems monitor the place where elderly people live and send a notification to an emergency center or to caregivers once a fall is detected. Many approaches and algorithms have been developed to detect falls. According to [1], fall detection approaches are divided into three main categories: vision-based, environmental (ambient devices), and wearable. Since vision-based systems are able to overcome the limitations of the other sensor types [2], a vision-based approach that utilizes the Kinect depth camera is used in this work.

II. LITERATURE REVIEW

Most of the previous vision-based work used fixed regular RGB cameras as the main sensor [3], [4]. In recent years, however, researchers have started using depth cameras, such as the Kinect sensor, to detect falls. Compared to classical methods based on regular RGB cameras, depth cameras improve performance at lower cost and with more functionality. Moreover, changing lighting conditions do not affect the result of a depth camera. The use of the Kinect sensor as a fixed camera has been proposed in the literature [5]-[7]. However, fixed Kinect sensors can be unreliable because of occlusions by furniture and the limitations of the Kinect viewpoint. In addition, the field of view of the Kinect is limited, so a single fixed sensor cannot cover the entire surveillance area. To overcome these drawbacks, a mobile robot is used in this work. The proposed fall detection system is combined with a proposed person-following system, so the robot is able to track and follow elderly people in order to monitor their activities. As a result, the coverage limitation of a fixed Kinect sensor is solved in this work.
The main goal of this work is to build a reliable fall detection system on a low-cost mobile robot platform.

III. HUMAN-ROBOT INTERACTION

Gesture recognition and speech recognition are among the most important research areas in the field of Human-Robot Interaction. Since the proposed system is designed to work with elderly people, who prefer a simple and natural way to interact with robots, it was important to create a natural user interface for the robot. Therefore, in this work, the Kinect sensor is used to develop and implement a gesture recognition system and a speech recognition system, providing a simple and natural interaction method through which elderly people can send commands to the robot without any physical contact.

IV. THE PROPOSED SYSTEM

The main components of the proposed system are shown in Fig. 1.

Fig. 1. Main components of the proposed system architecture.

A. Hardware Layer

1) The Kinect sensor: The Microsoft Kinect, released in November 2010, is a set of sensors that was originally developed as an input device for the Xbox 360 gaming console (Fig. 2). The Kinect can track human body movements, so the player can interact with the game without using a controller; the body is the controller.

Fig. 2. The Kinect device.

Manuscript received November 21, 2013; revised February 12, 2014. This work was financially supported by the Higher Committee for Education Development in Iraq (HCED). Z. A. Mundher is with the Department of Computer Science, University of Mosul, Mosul, Iraq ( [email protected]). J. Zhong is with the Department of Mathematics and Computer Science, University of Central Missouri, Warrensburg, MO 64093, USA ( [email protected]).
In June 2011, Microsoft released a software development kit (SDK) for the Kinect. The Kinect for Windows SDK is a toolkit with a set of libraries for developing applications for Kinect devices. Besides providing depth images, another feature of the Kinect sensor is its skeletal tracker. The Kinect for Windows SDK provides a skeleton-tracking feature that allows developers to recognize people and track their actions. Using its depth sensors, the Kinect can recognize up to six users standing between 0.8 and 4.0 meters away (2.6 to 13.1 feet). Two of the detected users can be tracked in detail with twenty joint positions. Each skeleton joint is measured in three dimensions (X, Y, Z). The X coordinate of a joint changes when the joint moves from the right side to the left side or vice versa, the Y coordinate changes when the joint moves upwards or downwards, and the Z coordinate changes when the joint moves forwards or backwards in relation to the Kinect sensor. The twenty joint points are shown in Fig. 3.

Fig. 3. Skeleton joint points.

B. Skills Layer

1) Person detection: To detect and track a person, the skeleton stream from the Kinect is used. An event is registered to listen for and process skeleton frames. Once a person is detected, the proposed algorithm starts processing the skeletal information.

2) Fall detection: The proposed method to detect a fall is based on the positions of the skeleton joints relative to the ground. Since the proposed fall detection algorithm calculates the distance between the body joint points and the floor plane, the floor plane must be detected first. The Kinect for Windows SDK provides a floor-clipping-plane vector, which contains the coefficients of an estimated floor-plane equation. To calculate the distance D between a point (x0, y0, z0) and the plane ax + by + cz + d = 0, Eq. (1) is used:

D = |a*x0 + b*y0 + c*z0 + d| / sqrt(a^2 + b^2 + c^2)    (1)

Using this relation, the distance from the floor to each body joint point can be calculated. A fall is detected by thresholding the distance between the ground and the joint points shown in Fig. 4 and Table I.

Fig. 4. Fall detection joint points.

TABLE I: FALL DETECTION JOINT POINTS
Number    Joint Point
#1        Head
#2        ShoulderCenter
#3        HipCenter
#4        AnkleRight
#5        AnkleLeft
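To make Eq. (1) concrete, the following C# fragment sketches the point-to-plane distance check for the five joints of Table I. It is only an illustrative sketch: the JointPosition type and the coefficient values are placeholders rather than the authors' code, the 0.3 m threshold mirrors Algorithm 1 below, and the requirement that all five joints lie near the floor follows the spirit of Algorithm 2, whereas Algorithm 1 itself evaluates one joint at a time.

// Illustrative sketch of Eq. (1): distance of each monitored joint to the
// estimated floor plane ax + by + cz + d = 0. Types and values are
// placeholders, not the authors' implementation.
using System;

struct JointPosition
{
    public float X, Y, Z;
    public JointPosition(float x, float y, float z) { X = x; Y = y; Z = z; }
}

static class FloorDistance
{
    // (a, b, c, d) are the coefficients of the estimated floor plane.
    public static double DistanceToFloor(float a, float b, float c, float d, JointPosition p)
    {
        return Math.Abs(a * p.X + b * p.Y + c * p.Z + d) /
               Math.Sqrt(a * a + b * b + c * c);
    }

    // A fall is flagged when every monitored joint (Table I) is close to the floor.
    public static bool IsFall(float a, float b, float c, float d,
                              JointPosition[] monitoredJoints, double threshold = 0.3)
    {
        foreach (var joint in monitoredJoints)
        {
            if (DistanceToFloor(a, b, c, d, joint) > threshold)
                return false;   // at least one joint is still well above the floor
        }
        return true;            // head, shoulder center, hip center and both ankles are near the floor
    }
}

A caller would obtain (a, b, c, d) from the SDK's floor-clipping-plane vector and the joint coordinates from the tracked skeleton.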
The procedure of the fall detection method based on the floor plane is shown in Algorithm 1. When the floor is not visible or detectable, the proposed algorithm falls back on the skeleton space coordinate system to detect falls. As shown in Fig. 5, the origin of the skeleton space coordinate system is placed at the Kinect sensor, and it is a right-handed coordinate system, which means the y-axis extends upwards. According to the proposed method shown in Algorithm 2, if the Y coordinates of the points listed in Table I are less than a given threshold, a fall is detected.

Fig. 5. Skeleton space coordinate system.

Algorithm 1: Fall Detection Method 1
function PersonFallingDown1(JointPoint)
    if floor_plane is detected
        declare float A = FloorClipPlane.Item1
        declare float B = FloorClipPlane.Item2
        declare float C = FloorClipPlane.Item3
        declare float D = FloorClipPlane.Item4
        declare float x = A * JointPoint.X
        declare float y = B * JointPoint.Y
        declare float z = C * JointPoint.Z
        declare float distance = (x + y + z + D) / sqrt((A*A) + (B*B) + (C*C))
        if (distance <= 0.3)
            return "Fall detected"
        else
            return "Safe"
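For orientation, the sketch below shows how the two methods might be dispatched per skeleton frame, reusing the JointPosition and FloorDistance helpers from the previous sketch. It assumes the Kinect for Windows SDK 1.x behaviour in which the skeleton frame exposes the floor-clipping-plane vector as a four-component tuple that is all zeros when no floor is detected; the surrounding names are placeholders, not the paper's code, and the fallback branch applies only the Y-threshold part of Algorithm 2, omitting the head-to-ankle change check for brevity.

// Dispatch between the two fall-detection methods for one skeleton frame.
// In the Kinect for Windows SDK 1.x the floor estimate is assumed to be a
// four-component tuple (a, b, c, d) reported as all zeros when the floor
// is not visible.
using System;

static class FallDetectionDispatcher
{
    public static string Evaluate(Tuple<float, float, float, float> floorClipPlane,
                                  JointPosition[] monitoredJoints,
                                  double floorThreshold,      // e.g. 0.3 m (Algorithm 1)
                                  double yThreshold)          // skeleton-space threshold (Algorithm 2)
    {
        bool floorDetected = floorClipPlane != null &&
                             (floorClipPlane.Item1 != 0 || floorClipPlane.Item2 != 0 ||
                              floorClipPlane.Item3 != 0 || floorClipPlane.Item4 != 0);

        if (floorDetected)
        {
            // Method 1: distance of each monitored joint to the floor plane, Eq. (1).
            bool fall = FloorDistance.IsFall(floorClipPlane.Item1, floorClipPlane.Item2,
                                             floorClipPlane.Item3, floorClipPlane.Item4,
                                             monitoredJoints, floorThreshold);
            return fall ? "Fall detected" : "Safe";
        }

        // Method 2 fallback: all monitored joints below a Y threshold in skeleton space.
        foreach (var joint in monitoredJoints)
        {
            if (joint.Y >= yThreshold)
                return "Safe";
        }
        return "Fall detected";
    }
}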
Algorithm 2: Fall Detection Method 2
function PersonFallingDown(skeleton)
    declare double yhead, yshouldercenter, yspine, yhipcenter, yankleleft, yankleright, Threshold
    declare double Head_Ankle1 = 0, Head_Ankle2 = 0, thr
    declare Boolean flag = true
    yhead = Y coordinate of the skeleton.Head point
    yshouldercenter = Y coordinate of the skeleton.ShoulderCenter point
    yhipcenter = Y coordinate of the skeleton.HipCenter point
    yankleleft = Y coordinate of the skeleton.AnkleLeft point
    yankleright = Y coordinate of the skeleton.AnkleRight point
    if Head_Ankle1 == 0
        Head_Ankle1 = yhead - yankleright
        thr = Head_Ankle1 * 0.7
    else
        Head_Ankle2 = yhead - yankleright
        if (abs(Head_Ankle1 - Head_Ankle2) > thr) and
           (yhead < Threshold) and
           (yshouldercenter < Threshold) and
           (yhipcenter < Threshold) and
           (yankleleft < Threshold) and
           (yankleright < Threshold) then
            return "Fall is detected"
        else
            return "Safe"

3) Gesture recognition: The proposed gesture-recognition engine is developed based on comparing the joint positions and the deviations between them. In this work, two types of gesture movements are recognized, as shown in Table II.

TABLE II: GESTURE PATTERNS
Gesture Pattern                                                          Action
Right hand above the right shoulder and left hand below the left hip    Start following
Left hand above the left shoulder and right hand below the right hip    Stop following

The StartFollowing gesture is identified when the right hand is raised above the right shoulder and the left hand is below the left hip, as shown in Fig. 6. To recognize this gesture, the right shoulder joint position is defined as the reference point, while the right hand joint position is defined as the target point.

Fig. 6. The StartFollowing gesture.

In contrast, the StopFollowing gesture is identified when the left hand is raised above the left shoulder while the right hand is below the right hip, as shown in Fig. 7. To recognize this gesture, the left shoulder joint position is defined as the reference point, while the left hand joint position is defined as the target point. The gesture recognition procedure is shown in Algorithm 3.

Fig. 7. The StopFollowing gesture.

Algorithm 3: Gesture Recognition Procedure
function GestureRecognition(skeleton, threshold)
    if (Abs(skeleton.HandRight.X - skeleton.ElbowRight.X) < threshold) and
       (Abs(skeleton.HandRight.Z - skeleton.ElbowRight.Z) < threshold) and
       (Abs(skeleton.ElbowRight.Z - skeleton.ShoulderRight.Z) < threshold) and
       (Abs(skeleton.HandRight.Y - skeleton.ShoulderRight.Y) < threshold) and
       (Abs(skeleton.HandRight.Z - skeleton.ShoulderRight.Z) < threshold) and
       (skeleton.HandLeft.Y < skeleton.HipLeft.Y) then
        cmd = Start
    if (Abs(skeleton.HandLeft.X - skeleton.ElbowLeft.X) < threshold) and
       (Abs(skeleton.HandLeft.Z - skeleton.ElbowLeft.Z) < threshold) and
       (Abs(skeleton.ElbowLeft.Z - skeleton.ShoulderLeft.Z) < threshold) and
       (Abs(skeleton.HandLeft.Y - skeleton.ShoulderLeft.Y) < threshold) and
       (Abs(skeleton.HandLeft.Z - skeleton.ShoulderLeft.Z) < threshold) and
       (skeleton.HandRight.Y < skeleton.HipRight.Y) then
        cmd = Stop
    return cmd
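A compact C# rendering of Algorithm 3 is sketched below, reusing the JointPosition struct from the earlier sketch. The GestureJoints container and the FollowCommand enumeration are placeholders standing in for the SDK's skeleton joints; the mapping of the result to the person-following behaviour follows Table II rather than the authors' code.

// Sketch of Algorithm 3: classify a StartFollowing / StopFollowing gesture
// from joint positions. The joint container below is a placeholder for the
// SDK's skeleton joints.
using System;

enum FollowCommand { None, Start, Stop }

class GestureJoints
{
    public JointPosition HandRight, ElbowRight, ShoulderRight, HipRight;
    public JointPosition HandLeft, ElbowLeft, ShoulderLeft, HipLeft;
}

static class GestureRecognizer
{
    public static FollowCommand Recognize(GestureJoints s, float threshold)
    {
        // StartFollowing: right hand aligned with the right shoulder/elbow
        // (the reference point) while the left hand stays below the left hip.
        bool start =
            Math.Abs(s.HandRight.X - s.ElbowRight.X) < threshold &&
            Math.Abs(s.HandRight.Z - s.ElbowRight.Z) < threshold &&
            Math.Abs(s.ElbowRight.Z - s.ShoulderRight.Z) < threshold &&
            Math.Abs(s.HandRight.Y - s.ShoulderRight.Y) < threshold &&
            Math.Abs(s.HandRight.Z - s.ShoulderRight.Z) < threshold &&
            s.HandLeft.Y < s.HipLeft.Y;
        if (start) return FollowCommand.Start;

        // StopFollowing: mirrored test on the left arm with the right hand
        // below the right hip.
        bool stop =
            Math.Abs(s.HandLeft.X - s.ElbowLeft.X) < threshold &&
            Math.Abs(s.HandLeft.Z - s.ElbowLeft.Z) < threshold &&
            Math.Abs(s.ElbowLeft.Z - s.ShoulderLeft.Z) < threshold &&
            Math.Abs(s.HandLeft.Y - s.ShoulderLeft.Y) < threshold &&
            Math.Abs(s.HandLeft.Z - s.ShoulderLeft.Z) < threshold &&
            s.HandRight.Y < s.HipRight.Y;
        if (stop) return FollowCommand.Stop;

        return FollowCommand.None;
    }
}

In the actual system the command would be produced per skeleton frame and used to start or stop the person-following behaviour described in Section IV-C.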
4) Speech recognition: In this part, the audio stream of the Kinect for Windows SDK is used to capture the Kinect audio data, and the Microsoft Speech Library is used to build the speech recognition engine. The speech recognition engine is able to recognize the three commands listed in Table III.

TABLE III: VOICE COMMANDS
Command   Description
Run       Start the fall monitoring system
Stop      Stop the fall monitoring system
Call      Make a call

C. Navigation Layer

1) Person following: The distance-based control loop approach [8] is used to develop the proposed person-following subsystem. According to the proposed algorithm, the robot moves toward the target person when the distance is greater than 2 m. Besides moving towards the target person, the robot needs to change its direction when the target person steps to the right or to the left. Therefore, the robot calculates right and left motion parameters in order to keep the target person in the center of the robot's view. The proposed algorithm keeps tracking the right and left shoulders in order to determine the direction of the target person. Based on the shoulder values, the robot determines whether a turn is necessary. As shown in Fig. 8(A), if the right and left shoulders are at the same distance from the robot (within a threshold), the forward command is executed. If the right shoulder is closer to the robot than the left shoulder, as shown in Fig. 8(B), the robot executes a rotate-left command. If the left shoulder is closer to the robot than the right shoulder, as shown in Fig. 8(C), the robot executes a rotate-right command.

Fig. 8. Person-following commands: (A) Forward, (B) Turn Left, (C) Turn Right.

D. Robot-Motion Control Layer

To interact with the robot's motors, the RobotControl class was built on top of the MindSqualls library, which is free to download. The RobotControl class provides six functions to control the robot, as shown in Fig. 9 and Table IV.

Fig. 9. The RobotControl class.

TABLE IV: THE ROBOTCONTROL CLASS METHODS
Method          Purpose
Connect2Robot   Initialize the connection to the robot
MoveForward     Drive the robot forward
MoveBackward    Drive the robot backward
TurnRight       Turn the robot to the right
TurnLeft        Turn the robot to the left
Stop            Stop the robot

E. Mobile Control Layer

To implement the fall detection alarm system, the robot is provided with a mobile phone that sends an SMS notification and makes an emergency call when the robot detects a fall. To reduce false alarms, after a fall is detected, the system waits 5 seconds to confirm that the fall is followed by a period of inactivity, which indicates that the fallen person needs help getting up. Once the five seconds end and the user is still lying on the ground, an SMS notification is sent to a caregiver's phone number. Moreover, based on the skeletal data, the robot moves toward the head of the fallen person so that it is close enough for the person to use the voice command to make a call. AT commands are used to control the mobile phone: the AT+CMGS command is used to send the SMS, while the ATD command is used to make a call.
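The sketch below illustrates how such AT commands could be issued over a serial link to a phone acting as a GSM modem. The COM port, baud rate, phone number, delays, and the text-mode setup (AT+CMGF=1) are assumptions added for completeness; the paper only states that AT+CMGS and ATD are used.

// Illustrative sketch: send an SMS and place a call through a phone exposed
// as a serial GSM modem. Port name, number, and delays are placeholders.
using System;
using System.IO.Ports;
using System.Threading;

static class PhoneAlarm
{
    public static void SendSms(string portName, string number, string text)
    {
        using (var port = new SerialPort(portName, 9600))
        {
            port.Open();
            port.Write("AT+CMGF=1\r");                  // assumed: switch the modem to SMS text mode
            Thread.Sleep(500);
            port.Write("AT+CMGS=\"" + number + "\"\r"); // start an SMS to the caregiver's number
            Thread.Sleep(500);
            port.Write(text + "\x1A");                  // message body terminated by Ctrl-Z
            Thread.Sleep(2000);                         // give the modem time to transmit
        }
    }

    public static void MakeCall(string portName, string number)
    {
        using (var port = new SerialPort(portName, 9600))
        {
            port.Open();
            port.Write("ATD" + number + ";\r");         // trailing ';' requests a voice call
        }
    }
}

In the proposed system these calls would be triggered only after the 5-second inactivity check confirms the fall.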
V. IMPLEMENTATION

The LEGO Mindstorms robot and the Kinect sensor are used to implement the proposed system. The Kinect processing is implemented with the Kinect for Windows SDK and C#. Moreover, a laptop is used as the processing unit (robot computer), and a Nokia 6200 mobile phone is used to raise an alarm (send an SMS or make a call). The introduced person-following method is given in Algorithm 4.

Algorithm 4: Person Following Procedure
function PersonFollowing(distance, Threshold, rightShoulder, leftShoulder)
    if (distance > 2 m) then
        if (abs(rightShoulder - leftShoulder) < Threshold) then
            MoveForward
        elseif (leftShoulder > rightShoulder) then
            TurnLeft
        else
            TurnRight
    else
        StopTheRobot
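Algorithm 4 maps naturally onto the RobotControl methods of Table IV. The sketch below shows one possible wiring: the shoulder depths are assumed to be the shoulder joints' Z values in meters, and the IRobotControl interface is a stand-in for the paper's RobotControl class rather than its actual implementation.

// Sketch of Algorithm 4 driving the motion methods listed in Table IV.
// IRobotControl mirrors the RobotControl method names; the concrete class
// (built on the MindSqualls library in the paper) is not shown here.
using System;

interface IRobotControl
{
    void MoveForward();
    void MoveBackward();
    void TurnRight();
    void TurnLeft();
    void Stop();
}

static class PersonFollower
{
    // distance: distance to the target person (m); the shoulder depths are
    // the Z coordinates of the right/left shoulder joints seen by the Kinect.
    public static void Step(IRobotControl robot, double distance,
                            double rightShoulderZ, double leftShoulderZ,
                            double threshold)
    {
        if (distance > 2.0)
        {
            if (Math.Abs(rightShoulderZ - leftShoulderZ) < threshold)
                robot.MoveForward();              // target centred: drive straight ahead
            else if (leftShoulderZ > rightShoulderZ)
                robot.TurnLeft();                 // right shoulder closer: rotate left
            else
                robot.TurnRight();                // left shoulder closer: rotate right
        }
        else
        {
            robot.Stop();                         // close enough: stop and keep monitoring
        }
    }
}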
Windows Presentation Foundation (WPF) is used to design the main user interface, shown in Fig. 10.

Fig. 10. Main user interface.

VI. EXPERIMENTS AND RESULTS

Experiments were conducted in a real indoor environment under different lighting conditions. First, the system was tested to verify that no false alarm is raised while the target person is sitting on a chair or lying on a bed; in contrast, a fall is detected once the target person is lying on the ground, as shown in Table V.

TABLE V: DIFFERENT SITUATIONS OF THE TARGET PERSON
Target Status         System Status
Sitting on a chair    Safe
Lying on a bed        Safe
Lying on the ground   Fall is detected

Second, the fall detection system was tested on different fall scenarios (fall to the right side, fall to the left side, fall to the front, and fall to the back). The fall scenarios are shown in Fig. 11, and the performance of the fall detection system is shown in Table VI and Fig. 12.

Fig. 11. Different fall scenarios.

TABLE VI: FALL DETECTION RESULTS
Distance   No. of simulated fall scenarios   No. of detected fall scenarios   No. of undetected fall scenarios   Accuracy
2.0 m      -                                 -                                -                                  50%
2.5 m      -                                 -                                -                                  75%
3.0 m      -                                 -                                -                                  100%
3.5 m      -                                 -                                -                                  100%

Fig. 12. Fall-detection accuracy.

In addition, the results show that the proposed gesture recognition system works perfectly when the distance is between 2 and 3.5 m, as shown in Fig. 13.

Fig. 13. Gesture recognition accuracy.

Moreover, the results also show that the speech recognition system works well. The person-following results show that the robot is able to move directly toward the target when the distance between the robot and the target person is larger than 2 m, while adjusting itself to keep the target in the middle of its field of view. Snapshots of the indoor person-following experiment are shown in Fig. 14.

Fig. 14. Sequence of frames of following the target.

VII. CONCLUSION

Today, the Kinect device is one of the major input devices used in the robotics field, and its use in conjunction with robotics provides several promising possibilities. In this work, the Kinect device is used to build a fall detection system on a mobile robot. To achieve real-time detection and tracking of a user, the skeletal tracking feature of the Kinect for Windows SDK is used. The results reveal that using the Kinect device to implement the proposed fall detection system is efficient, with low computational complexity.

REFERENCES

[1] S. Abbate, M. Avvenuti, P. Corsini, A. Vecchio, and J. Light, "Monitoring of human movements for fall detection and activities recognition in elderly care using wireless sensor network: a survey," in Wireless Sensor Networks: Application-Centric Design, Y. K. Tan, Ed. InTech, 2010.
[2] A. Mihailidis, B. Carmichael, and J. Boger, "The use of computer vision in an intelligent environment to support aging-in-place, safety, and independence in the home," IEEE Transactions on Information Technology in Biomedicine, vol. 8, no. 3, Sept. 2004.
[3] Y. Huang, S. Miaou, and T. Liao, "A human fall detection system using an omni-directional camera in practical environments for health care applications," presented at the MVA2009 IAPR Conference on Machine Vision Applications, Yokohama, Japan, May 20-22, 2009.
[4] V. Rudakova, "Probabilistic framework for multi-target tracking using multi-camera: applied to fall detection," M.S.
thesis, Color in Informatics and Media Technology, Gjøvik University College, Norway, 2010.
[5] C. Kawatsu, J. Li, and C. Chung, "Development of a fall detection system with Microsoft Kinect," in Robot Intelligence Technology and Applications 2012, vol. 208, J. Kim, E. T. Matson, H. Myung, and P. Xu, Eds. Springer Berlin Heidelberg, 2012.
[6] C. Rougier, E. Auvinet, J. Rousseau, M. Mignotte, and J. Meunier, "Fall detection from depth map video sequences," in Toward Useful Services for Elderly and People with Disabilities, vol. 6719, B. Abdulrazak et al., Eds. Springer Berlin Heidelberg, 2011.
[7] R. Planinc and M. Kampel, "Detecting falls by using depth cameras," presented at the 11th Asian Conference on Computer Vision, Daejeon, Korea, November 5-9, 2012.
[8] E. A. Topp and H. I. Christensen, "Tracking for following and passing persons," presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems, Alberta, Canada, Aug. 2-6, 2005.

Jiaofei Zhong is currently an assistant professor of computer science at the University of Central Missouri. She received her Ph.D. in computer science in 2012 and her M.S. degree in 2010, both from the University of Texas at Dallas. Dr. Zhong has served as a peer reviewer for a number of international conferences and journals, and has been the publicity chair, financial chair, and OCS co-chair in the organizing committees of several international conferences. Her research interests are in the areas of data engineering and information management, especially in wireless communication environments, including data broadcasting, vehicular ad hoc networks, and sensor databases.

Zaid A. Mundher is currently a master's student in the Department of Mathematics and Computer Science, University of Central Missouri. He received his B.S. in computer science from the University of Mosul, Iraq. He was a teaching assistant in the Department of Computer Science, University of Mosul, beginning in 2008.