THE MS KINECT USE FOR 3D MODELLING AND GAIT ANALYSIS IN THE MATLAB ENVIRONMENT



A. Procházka 1, O. Vyšata 1,2, M. Vališ 1,2, M. Yadollahi 1
1 Institute of Chemical Technology, Department of Computing and Control Engineering
2 Charles University, Faculty of Medicine in Hradec Králové, Department of Neurology

Abstract

Three-dimensional modelling is an important tool for the analysis of movement in both medicine and engineering. While video sequences acquired by a complex camera system enable very precise data analysis, much simpler technical devices can be used to analyse video frames with sufficient accuracy. The MS Kinect allows simple 3D modelling, using image and depth sensors for data acquisition with a precision of 4-40 mm depending upon the distance from the sensor. The resulting matrix of 640 x 480 pixels can then be used for the spatial modelling of a moving body. The goal of the paper is to present (i) the use of the MS Kinect for data acquisition, (ii) data processing in the MATLAB environment, and (iii) the application of the 3D skeleton model in biomedicine.

1 Introduction

Three-dimensional data acquisition forms an important research area related to space modelling, with many applications in engineering and biomedicine. While synchronized camera systems [3, 12, 10] allow a precise but also expensive technical solution, much less expensive systems based upon depth sensors can be used instead. The MS Kinect allows such data sets to be acquired and processed in the mathematical environment of MATLAB, enabling movement analysis, gesture recognition and the control of robotic systems [16, 15, 4, 6]. This paper is devoted to the use of the MS Kinect system for gait data acquisition and for the analysis of movement [1, 9, 5] and gait disorders, including the study of Parkinson's disease. Selected mathematical methods are applied to the motion tracking and classification of the observed sets of individuals.
2 Methods

2.1 Data Acquisition

Fig. 1 summarizes the principle used for the gait analysis of a selected frame. Fig. 1(a) presents the location of the MS Kinect sensors used to record separate image frames. The depth sensor consists of an infrared projector (on the left) and an infrared camera (on the right) that uses the structured-light principle [17, 11], allowing the distance of object components to be detected. The latest Image Acquisition Toolbox [7, 14] includes tools for the direct interconnection of the MS Kinect and the MATLAB environment, so that acquired data can be processed using the standard tools available for hardware devices connected to the PC through USB. The imaqhwinfo function provides the following information:

>> h = imaqhwinfo('kinect', 1)

          DefaultFormat: 'RGB_640x480'
    DeviceFileSupported: 0
             DeviceName: 'Kinect Color Sensor'
               DeviceID: 1
  VideoInputConstructor: 'videoinput(''kinect'', 1)'
 VideoDeviceConstructor: 'imaq.VideoDevice(''kinect'', 1)'
       SupportedFormats: {1x4 cell}

>> h = imaqhwinfo('kinect', 2)

          DefaultFormat: 'Depth_640x480'
    DeviceFileSupported: 0
             DeviceName: 'Kinect Depth Sensor'
               DeviceID: 2
  VideoInputConstructor: 'videoinput(''kinect'', 2)'
 VideoDeviceConstructor: 'imaq.VideoDevice(''kinect'', 2)'
       SupportedFormats: {1x3 cell}

Both the RGB camera and the depth sensor store information in matrices of size 640 x 480. The guaranteed and verified accuracy of the depth sensor ranges from 4 mm to approximately 40 mm, depending on the distance from the sensor [2, 8, 13]. Data of the videoinput class can then be previewed, and separate snapshots obtained and visualized, by the following commands:

% Image Sensor Data Acquisition
imagevid = videoinput('kinect', 1);
preview(imagevid);
imagemap = getsnapshot(imagevid);
imshow(imagemap)
stoppreview(imagevid)

% Depth Sensor Data Acquisition
depthvid = videoinput('kinect', 2);
preview(depthvid);
depthmap = getsnapshot(depthvid);
imshow(depthmap)
stoppreview(depthvid)

Matrices of image data and depth-sensor data can be processed further. Special attention should be given to the depthmap matrix, which contains the depth-sensor reading associated with each image element. Fig. 1(c) presents a selected image-sensor frame, while Fig. 2 shows the difference of subsequent image frames used to reject the static parts of the observed scene. It should be kept in mind that this difference is zero for static parts only up to the errors given by the accuracy of the Kinect sensors. Fig. 2(c) presents this difference for a selected scene area and image data. A similar analysis confirms the assumed accuracy of the depth sensor as well.

Figure 1: MS Kinect use for 3D modelling presenting (a) MS Kinect sensors, (b), (c) the depth sensor map, (d) selected RGB camera frame, and (e) the 3D skeleton model in MATLAB
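The frame-differencing step used to reject the static parts of the scene can be sketched as follows. This is a minimal illustration, not the authors' exact procedure: the noise threshold value is an assumption, and rgb2gray/imabsdiff require the Image Processing Toolbox.

```matlab
% Frame differencing to suppress the static background (illustrative sketch).
% frame1, frame2 are two subsequent RGB frames obtained by getsnapshot.
g1 = rgb2gray(frame1);                  % convert both frames to intensity images
g2 = rgb2gray(frame2);
d  = imabsdiff(g1, g2);                 % absolute difference of subsequent frames
mask = d > 10;                          % sensor-noise threshold (assumed value)
% keep only the moving pixels of the second frame
moving = frame2 .* cast(repmat(mask, [1 1 3]), 'like', frame2);
imshow(moving)                          % static parts of the scene are rejected
```

As noted above, the residual difference in static regions is nonzero only because of sensor noise, which is why a small threshold is applied before masking.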

Figure 2: MS Kinect data analysis presenting (a) the selected frame acquired by the image sensor, (b) the difference of selected subsequent image frames, and (c) the distribution of image noise in the selected static region

2.2 Skeleton Tracking

The skeleton tracking algorithm processes these data to provide information about the location of the joints specified in Fig. 1(c), using the joint numbering and the connection map presented in Table 1.

Table 1: Skeleton positions and connection map of video records

  Skeleton Positions                            Connection Map
  Joint            No.   Joint           No.    Part         Connection Vectors
  hip center        1    wrist right     11     spine        [1 2],[2 3],[3 4]
  spine             2    hand right      12     left hand    [3 5],[5 6],[6 7],[7 8]
  shoulder center   3    hip left        13     right hand   [3 9],[9 10],[10 11],[11 12]
  head              4    knee left       14     right leg    [1 17],[17 18],[18 19],[19 20]
  shoulder left     5    ankle left      15     left leg     [1 13],[13 14],[14 15],[15 16]
  elbow left        6    foot left       16
  wrist left        7    hip right       17
  hand left         8    knee right      18
  shoulder right    9    ankle right     19
  elbow right      10    foot right      20

Data analysis is based upon the detection of image components. While it is possible to propose one's own algorithms, we used the standard method for the detection of joints and for skeleton tracking in M cycles by the following commands:

start(depthvid); start(colorvid);
for i = 1:M
    trigger(depthvid); trigger(colorvid);
    image = getdata(colorvid);
    [depthmap, ~, depthMetaData] = getdata(depthvid);
    skeletonJoints = ...
        depthMetaData.JointImageIndices(:,:,depthMetaData.IsSkeletonTracked);
    R{i} = skeletonJoints;
end

Information saved in the cell variable R is then used for further data analysis.
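The connection map of Table 1 can be used directly to overlay the tracked skeleton on an image frame. The following is a sketch under the assumption that J is a 20 x 2 matrix of joint image coordinates, e.g. one page of the skeletonJoints array collected in the tracking loop above:

```matlab
% Draw the skeleton defined by the connection map of Table 1 (sketch).
% J is assumed to be a 20x2 matrix of [x y] joint image coordinates.
C = [1 2; 2 3; 3 4;                  % spine
     3 5; 5 6; 6 7; 7 8;            % left hand
     3 9; 9 10; 10 11; 11 12;       % right hand
     1 17; 17 18; 18 19; 19 20;     % right leg
     1 13; 13 14; 14 15; 15 16];    % left leg
hold on
for k = 1:size(C, 1)
    % each row of C is one connection vector joining two joints
    plot(J(C(k,:), 1), J(C(k,:), 2), 'g-', 'LineWidth', 2);
end
plot(J(:,1), J(:,2), 'ro');          % mark the joint positions themselves
hold off
```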

3 Results

Skeleton data acquired by the MS Kinect system were further used for gait analysis with a sampling rate of 30 frames per second. Signal processing included digital filtering of the data to reduce observation errors, using the evolution of the mass centers presented in Fig. 3 for a selected individual and the chosen central joints 1-3. Fig. 4 presents the time evolution of the centers of mass of the right leg (joints 19-20) and the left leg (joints 15-16), from which the evolution of their distance with respect to the walk length can be derived. The stride length was then evaluated using the maxima of the distances between the legs.

Figure 3: The evolution of the mass center evaluated for selected joints during a walk of a chosen individual

Figure 4: The evolution of the mass centers of individual legs and their distance used for the stride length evaluation

The experimental part of the study was devoted to the gait analysis of two sets of individuals observed in the Movement Disorders Center of the Faculty Hospital in Hradec Králové. The first set included 20 patients with Parkinson's disease (PD) and the second 20 healthy age-matched individuals. The MS Kinect was installed about 60 cm above the floor to record the straight walk of each individual. The walk was approximately 4 m long and was repeated 5 times. Special attention was paid to the removal of gross errors during the turns of the individuals. The results proved a substantial difference between the average stride length of the individuals with Parkinson's disease and that of the age-matched controls.
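The stride-length evaluation described above (maxima of the distance between the legs) can be sketched as follows. The variable names L and R are assumptions: they stand for N x 3 matrices of the left- and right-leg mass-center positions [x y z] over N frames, derived from the tracked joints; findpeaks requires the Signal Processing Toolbox.

```matlab
% Stride length estimation from the leg mass centers (illustrative sketch).
% L, R: assumed N x 3 matrices of left/right leg mass-center positions.
d = sqrt(sum((L - R).^2, 2));        % distance between the legs per frame
[pks, locs] = findpeaks(d);          % local maxima of the distance sequence
strideLength = mean(pks);            % average stride length estimate
fprintf('Average stride length: %.2f m\n', strideLength);
```

In practice the distance sequence d would first be low-pass filtered, consistent with the digital filtering of observation errors mentioned above, so that sensor noise does not produce spurious maxima.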
4 Conclusion

The results obtained confirm that the MS Kinect device can be used for gait analysis with sufficient accuracy and that, in many cases, it can replace expensive camera systems. However, it is necessary to choose appropriate mathematical processing methods to reduce the errors of the image and depth sensors. Further analysis will be devoted to more accurate joint detection as well.

Acknowledgments

The work has been supported by the research grant of the Faculty of Chemical Engineering of the Institute of Chemical Technology, Prague, No. MSM 6046137306.

References

[1] M. Gabel, E. Renshaw, A. Schuster, and R. Gilad-Bachrach. Full Body Gait Analysis with Kinect. In International Conference of the IEEE Engineering in Medicine and Biology, 2012.
[2] K. Khoshelham and S. O. Elberink. Accuracy and Resolution of Kinect Depth Data for Indoor Mapping Applications. Sensors, 12:1437-1454, 2012.
[3] J. Lasenby and A. Stevenson. Using Geometric Algebra for Optical Motion Capture. In E. Bayro-Corrochano and G. Sobczyk, editors, Applied Clifford Algebras in Computer Science and Engineering. Birkhauser, Boston, U.S.A., 2000.
[4] Yi Li. Multi-scenario Gesture Recognition Using Kinect. In The 17th International Conference on Computer Games, pages 126-130, 2012.
[5] U. Lindemann, M. Jamour, S. E. Nicolai, P. Benzinger, J. Klenk, K. Aminian, and C. Becker. Physical activity of moderately impaired elderly stroke patients during rehabilitation. Physiological Measurement, 33(11):1923, 2012.
[6] E. Machida, M. Cao, T. Murao, and H. Hashimoto. Human Motion Tracking of Mobile Robot with Kinect 3D Sensor. In SICE Annual Conference, pages 2207-2211. Akita University, Japan, 2012.
[7] The MathWorks, Inc., Natick, Massachusetts 01760. Image Acquisition Toolbox, 2013.
[8] S. Obdrzalek, G. Kurillo, F. Ofli, R. Bajcsy, E. Seto, H. Jimison, and M. Pavel. Accuracy and Robustness of Kinect Pose Estimation in the Context of Coaching of Elderly Population. In The 34th Annual International Conference of the IEEE EMBS, pages 1188-1193. IEEE USA, 2012.
[9] J. Preis, M. Kessel, M. Werner, and C. Linnhoff-Popien. Gait Recognition with Kinect. In 1st International Workshop on Kinect in Pervasive Computing, Newcastle, pages P1-P4, 2012.
[10] A. Procházka, M. Kubíček, and A. Pavelka. Multicamera Systems in the Moving Body Recognition. In The 48th Int. Symposium ELMAR 2006. IEEE, 2006.
[11] S. Qin, X. Zhu, and Y. Yang. Real-time Hand Gesture Recognition from Depth Images Using Convex Shape Decomposition Method. Journal of Signal Processing Systems, 2013.
[12] M. Ringer and J. Lasenby. Multiple Hypothesis Tracking for Automatic Optical Motion Capture. Lecture Notes in Computer Science, 2350, 2002.
[13] J. Smisek, M. Jancosek, and T. Pajdla. 3D with Kinect. In IEEE International Conference on Computer Vision, pages 1154-1160. IEEE USA, 2012.
[14] B. Tannenbaum. Private communication, 2013.
[15] Y. Wang, C. Yang, X. Wu, S. Xu, and H. Li. Kinect Based Dynamic Hand Gesture Recognition Algorithm Research. In The 4th International Conference on Intelligent Human-Machine Systems and Cybernetics, pages 274-279, 2012.

[16] C. Zhang, J. Xu, N. Xi, Y. Jia, and W. Li. Development of an omni-directional 3D camera for robot navigation. In The 2012 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, pages 262-267. IEEE USA, 2012.
[17] Z. Zhang. Microsoft Kinect Sensor and Its Effect. IEEE Multimedia, 19(2):4-10, 2012.

Prof. Aleš Procházka, Ing. Mohammadreza Yadollahi
Institute of Chemical Technology, Prague
Department of Computing and Control Engineering
Technická 1905, 166 28 Prague 6
Phone: +420-220 444 198
E-mail: A.Prochazka@ieee.org, Mohammadreza1.Yadollahi@vscht.cz, Ondrej.Tupa@vscht.cz

MUDr. Oldřich Vyšata, Ph.D., Assoc. Prof. Martin Vališ, Ph.D.
Charles University, Prague
Faculty of Medicine in Hradec Králové
Department of Neurology
Sokolská 581, 500 05 Hradec Králové
Phone: +420-495835200
E-mail: Vysatao@gmail.com, Valismar@seznam.cz