Applications of Human Motion Tracking: Smart Lighting Control


2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops

Sung Yong Chun, Yeungnam University, 214-1 Dae-dong, Gyeongsan-si, Gyeongsangbuk-do, Rep. of Korea, whiteyongi@ynu.ac.kr
Chan-Su Lee, Yeungnam University, 214-1 Dae-dong, Gyeongsan-si, Gyeongsangbuk-do, Rep. of Korea, chansu@ynu.ac.kr

Abstract

This paper presents a smart lighting control system based on human motion tracking. The proper illumination and color temperature depend on human activities, so a smart lighting system that automatically controls illumination and color temperature must track human motion and understand human activities. The infrared and thermal spectra provide information that is robust to lighting conditions: depth information can be acquired independently of the lighting condition and makes it relatively easy to detect humans regardless of their clothing and skin color. Commercial depth cameras and thermal cameras were used for accurate tracking and for estimating human behavior. Activity modes are estimated from the human motion tracking results of the depth and thermal cameras, and multiple depth cameras are used to detect human motion over a large area. Activity modes such as study mode and watching-TV mode are estimated, and the illumination and color temperature of the LED lighting system are controlled in real time according to the estimated activity.

1. Introduction

Recently, demand for energy savings in general lighting, which accounts for approximately 20% of all electricity consumption, has increased. Conventional light bulbs are being replaced with LED lighting devices because LED lights consume less energy and are easier to control than conventional lighting. Consequently, there have been many research activities aimed at energy savings using LED lighting, smart sensors, and intelligent control. Sensor belief networks and their extensions have been developed to provide efficient lighting control from multiple sensor inputs [2, 4]. Illumination has been optimized and responsive control generated by fusing emerging sensor and IT technologies [6, 7]. The optimal intensity and color have been calculated by incorporating illuminance feedback from a portable photosensitive device [1]. Human locations estimated with ultrasonic or IR sensors have been used to switch LED lighting on and off [3]. Although the required lighting condition varies with the activity [5], it is difficult to estimate accurate locations, movement trajectories, and the number of people in a space under changing illumination.

In this study, camera systems beyond the visible spectrum are used for accurate tracking of human motion and estimation of human activities. Lighting is difficult to control using conventional 2D cameras because they are sensitive to changes in lighting conditions. Depth and thermal information, however, can be acquired independently of lighting changes from a depth camera such as the Kinect® or from a thermal camera, and both make it relatively easy to detect humans regardless of clothing, skin color, and background conditions. Depth or thermal cameras are therefore used because they are robust to changes in lighting and can detect and localize people more accurately than other sensors such as conventional image sensors and ultrasonic sensors.
In this paper, tracking of multiple people and lighting control are implemented based on the estimated locations of people in a living room. Multiple cameras are used to track people over a large area, and an image-map projection provides a combined representation of the people detected by the different cameras. Person identification across cameras is achieved by comparing the proximity, clothing color, and height of the tracked subjects. Activities are estimated from the tracked locations, and the lighting conditions are controlled according to the estimated activity modes. The proposed system provides convenient lighting control and energy savings by supplying proper lighting conditions according to human activities and the distance to specified activity areas.

1.1. Procedure of smart lighting control

The following procedure is performed to control the smart lighting efficiently. The first step is to capture human motion with depth cameras or thermal cameras; each Kinect® depth camera provides the distance to objects at 320 x 240 resolution. The second step is to detect and track human locations from the head location extracted after person detection. A thermal camera can be used to extract human locations in a similar way, and more than one person can be detected simultaneously. From the detected person locations, a global map is generated to track human activity across multiple cameras and to control the lighting over a large area. The illumination control parameters are calculated from the distance to the lighting location, the estimated activity mode, and the person's location. The control signals are sent to a lighting controller, and the actual lighting control is performed using the DMX512 protocol. Figure 1 shows the sequence of smart lighting control.

[Figure 1. Procedure of smart lighting control]

2. Tracking People and Activity Estimation

To control the lighting system according to human activities, depth cameras or thermal cameras are used for human detection and tracking. Two Kinect® cameras are installed on the ceiling of the test room. Depth information is relatively robust to changes in illumination, and the measured distance to objects is quite useful for removing the background and detecting people. A 3 m x 5 m area is covered by the two depth cameras. After the moving-object area is detected in the depth image, the head location is estimated to determine the accurate location of each tracked person. Features such as height and moving direction are used to identify a person when people cross paths in the same area or move between cameras. Map images are used to control the lighting according to the specific location of the subject. Once a human location has been detected, the subsequent lighting-control procedure is the same for the depth and thermal cameras.

2.1. Human Detection and Location Estimation

Human detection and location estimation from a depth camera: Depth information is used to detect human subjects. Because background objects are not needed for human detection, background depth pixels are removed using distance threshold values. After the background objects are removed, connected components are extracted to obtain the foreground blobs, and each person's location is estimated in image space as the center of the corresponding foreground blob. Figure 2 shows the detection of a person from depth images.

[Figure 2. Human detection from a depth camera: (a) original depth image, (b) depth image after removing the background, (c) detected connected component, (d) detected person's bounding box and center location.]

The center of the foreground blob, however, is not the true human location. The head location represents the location of the detected person in the XY plane more accurately (the Z axis is the vertical direction in three-dimensional space) when people sit, stand, and walk in the living room, because the head area is closest to the ceiling-mounted camera.
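As an illustration of this detection step, the following is a minimal sketch in Python with OpenCV and NumPy. The depth encoding, threshold values, and minimum blob size are assumptions for illustration, not values reported in the paper.

```python
import cv2
import numpy as np

def detect_people(depth_mm, bg_thresh_mm=2500, min_area=800):
    """Detect people in a ceiling-mounted depth frame (e.g., 320x240, in mm).

    Pixels farther away than bg_thresh_mm are treated as background (floor
    and furniture); the remaining foreground blobs are assumed to be people.
    All numeric values here are illustrative assumptions.
    """
    # 1. Background removal: keep only pixels closer than the threshold.
    #    Invalid depth readings (0) are also discarded.
    foreground = ((depth_mm > 0) & (depth_mm < bg_thresh_mm)).astype(np.uint8)

    # 2. Connected-component analysis on the foreground mask.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(foreground)

    people = []
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] < min_area:
            continue  # ignore small noise blobs
        x, y, w, h = stats[i, :4]
        cx, cy = centroids[i]
        people.append({"bbox": (x, y, w, h), "center": (cx, cy), "label": i})
    return people, labels
```

Because the cameras look straight down in this setup, a simple distance threshold separates people from the floor; a per-pixel background model would be a natural extension for more cluttered rooms.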

We estimate the head area from the cumulative distribution function of the histogram of the measured foreground distance values: the nearest 10% of the detected foreground is taken as the head area, and the center of this area is taken as the head location. Figure 3 shows an example of an extracted head area and the human location estimated from it.

[Figure 3. Estimation of human location from the head area: (a) head area estimated from the histogram, (b) estimated human location (green) from the head area, (c) depth histogram distribution.]

Human detection and location estimation from a thermal camera: Thermal information is used to detect human subjects in a similar way to depth information. A thermal camera detects wavelengths close to those emitted at human body temperature, so the body gives a much higher response than the background objects. A fixed threshold value yields a foreground-segmented image, from which the center of the human location is calculated after noise is removed with a low-pass filter. Connected components larger than a predefined threshold are counted as human locations, and the center of gravity of each foreground connected component is taken as the center of the detected person. Since the head area is not easily segmented in a thermal image, the whole body area is used, which provides a robust estimate of the body location. Figure 4 shows an original thermal image, the estimated human location, and the resulting lighting control.

[Figure 4. Human detection and lighting control from a thermal camera: (a) original thermal image, (b) detected human and its center, (c) lighting control based on the person location estimated from the thermal camera.]

Shoulder detection and heading-direction estimation: The heading direction is very useful for smart lighting control during human locomotion, since the lighting ahead of a walking person should be brighter than the lighting behind. The heading direction can be estimated from the detected shoulders because it is orthogonal to the plane passing through the centers of the shoulders. The shoulder centers are estimated from the bounding box and its contact points after the head area is removed: the two contact points along the longer side of the bounding rectangle are taken as the shoulder edges. Figure 5 shows examples of detected shoulders.

[Figure 5. Estimation of shoulders for the heading direction]

2.2. Global Trajectory Estimation and Tracking using a Map Image

For an efficient representation of the detected people and easy handling of the camera locations relative to the room layout, the captured depth-map images are transformed onto a two-dimensional plane. The trajectories from the multiple cameras are projected onto a global trajectory map using a perspective transformation, and the detected people are represented in this plane, which is called the map image. Figure 6 shows the perspective transformation and the representation of a detected person in the map image.
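The projection onto the map image can be realized with a standard planar homography. The sketch below assumes four floor points whose image and room coordinates are known; the specific coordinates are made-up calibration values, not ones reported in the paper.

```python
import cv2
import numpy as np

# Image coordinates (pixels) of four reference points on the floor, and
# their positions on the global map (here in cm of room coordinates).
# These calibration values are illustrative assumptions.
img_pts = np.float32([[40, 30], [280, 25], [300, 220], [20, 230]])
map_pts = np.float32([[0, 0], [300, 0], [300, 500], [0, 500]])  # 3 m x 5 m room

H = cv2.getPerspectiveTransform(img_pts, map_pts)

def to_map(person_center):
    """Project a detected person's image location into the global map."""
    p = np.float32([[person_center]])            # shape (1, 1, 2) as OpenCV expects
    return cv2.perspectiveTransform(p, H)[0, 0]  # (x, y) in map coordinates
```

Each camera gets its own homography; projecting all detections into the same map coordinates is what allows trajectories to be stitched across cameras.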

[Figure 6. Transformation of the captured image and representation of the person in the map image: (a) perspective transformation, (b) map representation of a detected person.]

For an accurate estimation of the trajectories of multiple persons, it is important to identify each individual from the detected head locations. Proximity basically provides the identity of a moving person's trajectory; however, it does not give reliable identification when two trajectories cross or when a person moves between different cameras. Therefore, the center location of the detected person, the person's height, and the color of the clothes are used together to identify each person. For the clothes color, a histogram of the hue and saturation (HS) values in the HSV color space is used, which makes the comparison relatively independent of illumination changes. The extracted feature vectors are compared to identify different people; if the distance (error) is larger than a predefined threshold, a new person ID is assigned, because the detected person is assumed to be new.

Equation (2) measures the position distance between person $i$ at time $t$ and person $j$ at the previous time $t-1$. Similarly, Equation (3) gives the height difference of each person, based on the minimum depth distance from the cameras, and Equation (4) compares the clothes-color histograms of detected person $i$ at time $t$ and person $j$ at time $t-1$. The overall distance is the square root of the weighted sum of the squared feature distances, as shown in Equation (1):

$$d^{ij}(t) = \sqrt{w_1\, d_p^{ij}(t)^2 + w_2\, d_h^{ij}(t)^2 + w_3\, d_H^{ij}(t)} \qquad (1)$$

$$d_p^{ij}(t) = \| p^i(t) - p^j(t-1) \| \qquad (2)$$

$$d_h^{ij}(t) = | h^i(t) - h^j(t-1) | \qquad (3)$$

$$d_H^{ij}(t) = \chi^2\big(H^i(t), H^j(t-1)\big) \qquad (4)$$

$$\chi^2(H^i, H^j) = \sum_I \frac{\big(H^i(I) - H^j(I)\big)^2}{H^i(I)} \qquad (5)$$

$$\mathrm{ID}(i) = \begin{cases} \mathrm{ID}(j), & \text{if } d^{ij} = \min_j d^{ij} \ \text{and} \ d^{ij} < T, \\ \text{new ID}, & \text{otherwise}, \end{cases} \qquad (6)$$

where $i = 1, \ldots, N_t$, $j = 1, \ldots, N_{t-1}$, $N_t$ is the number of detected persons at time $t$, and $T$ is the threshold below which two detections are considered the same person.
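A direct transcription of Equations (1)-(6) might look as follows. The weights, threshold, and feature layout are assumptions made for illustration; the paper does not report its parameter values.

```python
import numpy as np

W = (1.0, 1.0, 1.0)  # w1, w2, w3 -- assumed weights; not given in the paper
T = 1.5              # matching threshold, also an assumed value

def chi2(Hi, Hj, eps=1e-9):
    """Chi-square distance between two HS color histograms, Eq. (5)."""
    Hi, Hj = np.asarray(Hi, float), np.asarray(Hj, float)
    return np.sum((Hi - Hj) ** 2 / (Hi + eps))

def distance(curr, prev):
    """Overall feature distance between a current and a previous detection,
    Eqs. (1)-(4). Each detection is a dict with keys 'pos' (map
    coordinates), 'height', and 'hist' (HS histogram)."""
    d_p = np.linalg.norm(np.asarray(curr["pos"]) - np.asarray(prev["pos"]))
    d_h = abs(curr["height"] - prev["height"])
    d_H = chi2(curr["hist"], prev["hist"])
    return np.sqrt(W[0] * d_p**2 + W[1] * d_h**2 + W[2] * d_H)

def assign_ids(detections, previous, next_id):
    """Eq. (6): give each detection the ID of its closest previous match,
    or a new ID if the best distance exceeds the threshold T."""
    for det in detections:
        dists = [distance(det, p) for p in previous]
        if dists and min(dists) < T:
            det["id"] = previous[int(np.argmin(dists))]["id"]
        else:
            det["id"], next_id = next_id, next_id + 1
    return detections, next_id
```

This greedy sketch can assign the same previous ID to two detections; a one-to-one assignment (e.g., Hungarian matching) would be the more careful implementation.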
3. Smart Lighting Control System

A smart lighting room was built with two cameras and twelve LED flat panels installed in the ceiling. Full-color LED flat-panel lights are used to control the color temperature and illuminance. The room is equipped with a TV set, a table, and a desk, and the lighting is controlled by human activities. If a person remains near the desk, the person is assumed to be studying and the lighting control mode is changed to study mode. If the person sits on the sofa, which is in front of the TV set, the lighting mode is called watching-TV mode. If two people sit on the sofa, the lighting mode is dialog mode; otherwise, people are assumed to be in walking-around mode. The lighting conditions of all these modes are defined according to the activity-mode area of the detected person. Figure 7 shows a schematic diagram of the smart lighting living room used in these experiments.

3.1. Activity mode

The activity modes are divided into four types: walking around, studying, dialog, and watching TV. The twelve color LED flat panels can be controlled in color chromaticity, color temperature, and illuminance, and the lighting condition of each panel is set using the DMX512 protocol. The average illuminance was set to 400 lx, and the lighting was controlled as follows for each activity, as shown in Figure 7. The color temperature and illuminance of each activity mode were chosen based on the activity preferences reported in [5]. The details of each activity and its lighting condition are as follows:

- Moving (walking around) mode: the closest lighting panel is set to a 6500 K color temperature.
- Study mode: when a participant remains near the desk, the lighting panel above the desk is set to a 7000 K color temperature.
- Dialog mode: when two persons remain near the table, the nearest lighting panel is set to a 5000 K color temperature.
- Watching-TV mode: when participants remain near the TV, the colors of all the lighting panels are synchronized with the dominant colors of the TV screen images.

For watching-TV mode, a web camera is used to capture the TV screen images. The dominant color of the captured screen is extracted after converting from RGB to HSV color space, which separates chromaticity from brightness and helps extract color information that is robust to changes in the lighting illumination. A sketch of this extraction is given below.
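For instance, the dominant hue could be extracted as follows; the capture-device index, histogram size, and the fixed saturation and value used for the panel color are illustrative assumptions.

```python
import cv2
import numpy as np

def dominant_panel_color(frame_bgr, sat=200, val=255):
    """Estimate the dominant color of a captured TV frame.

    Converts to HSV, histograms the hue channel, and returns a BGR color
    of the most frequent hue at a fixed saturation/value. The fixed
    sat/val values are assumptions; the paper does not specify them.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [180], [0, 180])  # hue channel
    dominant_hue = int(np.argmax(hist))

    # Convert the dominant hue back to a displayable color for the panels.
    pixel = np.uint8([[[dominant_hue, sat, val]]])
    return cv2.cvtColor(pixel, cv2.COLOR_HSV2BGR)[0, 0]

# Usage sketch: grab one frame from a web camera watching the TV screen.
cap = cv2.VideoCapture(0)  # device index is an assumption
ok, frame = cap.read()
if ok:
    print(dominant_panel_color(frame))
cap.release()
```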

3.2. Lighting Control

To control the lighting condition according to the activity mode, a 34 W full-color LED flat-panel light, 300 mm x 300 mm in size and using 100 LEDs, was developed. A lighting-control driver with three channels controls each full-color LED flat panel via the DMX512 protocol.

[Figure 7. Schematic diagram of the smart lighting living room, the lighting panels and furniture locations on the global map, and captured sample scenes in different activity modes: (a) schematic diagram of the smart lighting living room, (b) global map with lighting-source and furniture locations, (c) sample scenes of lighting control in different modes (left: study mode, middle: watching-TV mode, right: walking-around mode).]

The lighting locations and activity areas were defined in advance. If the estimated person location is within the range of a specific activity area, the lighting is controlled according to the predefined lighting conditions; otherwise, walking-around mode is selected. In walking-around mode, the lighting is controlled based on the location of the person within a predefined distance. Figure 8 shows an example of distance-based lighting control. The distance from the center of the activity area to the person's location is used as a control parameter in addition to the activity mode: the effect of the activity-specific lighting condition is proportional to the distance from the middle of the activity lighting to the location of the person. Using this proximity information, lighting for more than one person in different activities can be controlled easily by superposing the control of each activity mode according to the distance to each subject.

[Figure 8. Distance-based lighting control]
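One way to realize this distance-based superposition is sketched below; the falloff radius, panel position, and linear weighting are assumptions made for illustration, since the paper does not specify the blending rule.

```python
import numpy as np

# Activity-mode color temperatures (K) from the paper; the falloff radius
# and linear weighting are assumed values.
MODE_CCT = {"walk": 6500, "study": 7000, "dialog": 5000}
RADIUS_CM = 150.0  # distance at which a person stops influencing a panel

def panel_weight(panel_xy, person_xy, radius=RADIUS_CM):
    """Linear falloff of a person's influence on a panel with distance."""
    d = np.linalg.norm(np.asarray(panel_xy) - np.asarray(person_xy))
    return max(0.0, 1.0 - d / radius)

def blend_cct(panel_xy, people):
    """Superpose the activity-mode contributions of all tracked people.

    `people` is a list of (map_position, mode) pairs. Returns a weighted
    average color temperature for one panel, or None if nobody is near.
    """
    weights, ccts = [], []
    for pos, mode in people:
        w = panel_weight(panel_xy, pos)
        if w > 0:
            weights.append(w)
            ccts.append(MODE_CCT[mode])
    if not weights:
        return None  # panel can dim to an idle level
    return float(np.average(ccts, weights=weights))

# Example: a studying person near the desk panel and a walker farther away.
print(blend_cct((100, 100), [((110, 90), "study"), ((220, 100), "walk")]))
```

The blended temperature would then be converted to per-channel levels and written into the panel's DMX512 slots by the lighting controller.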

3.3. Evaluation of the energy savings of smart lighting control using the distance from the activity area

The energy-saving performance of the proposed smart lighting control was evaluated. Because the energy used depends on the time spent in each activity mode, the evaluation assigned an equal time to each activity and computed the average estimated expected energy saving; the actual energy saving can vary with the time spent in each activity. Table 1 shows the quantitative evaluation of the energy savings when energy consumption is measured for ten minutes per activity. Figure 9 shows scenes of the lighting control according to the recognized activity.

[Figure 9. Evaluation of the energy saving based on the activity mode: (a) walking-around mode, (b) dialog mode, (c) study mode, (d) watching-TV mode.]

Table 1. Energy saving in different modes

Mode     Method     Color temperature   Average Power (W)   Energy Saving (%)
Idle     -          -                   33.3                -
Study    normal     7000 K              177.2               -
Study    proposed   7000 K              85.6                51.69
Walk     normal     6500 K              163.0               -
Walk     proposed   6500 K              89.1                45.34
Dialog   normal     5000 K              175.6               -
Dialog   proposed   5000 K              109.3               37.76

3.4. Lighting control based on thermal array sensors

For thermal imaging and easy development of the application, Heimann thermopile arrays with 32 x 31 elements were used. The device, with its microcontroller, communicates with a PC via RJ45/Ethernet and can be used for real-time lighting control based on the detected human locations. The power consumption of the thermopile is much lower than that of the depth camera: the thermopile consumes 3.5 W, whereas the Kinect® depth camera consumes 15 W, about five times as much. Thermal images are captured at 20 frames per second. The procedure for controlling the lighting is similar to the depth-camera case, except for the depth camera's procedures for detecting the person and the head.

4. Conclusion

This paper presented a smart lighting control system that uses depth cameras or a thermal camera to estimate the locations of multiple persons, estimates the activity from those locations, and controls the lighting according to the activity mode. With this activity-aware control of LED lighting, the system also achieves energy savings by automatically controlling color temperature and illumination according to the activities and by reducing the lighting intensity when a person is far from the light sources.

Acknowledgements: This research was financially supported by the Ministry of Education, Science and Technology (MEST) and the National Research Foundation of Korea (NRF) through the Human Resource Training Project for Regional Innovation, and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2012R1A1B4003830).

References

[1] M. Aldrich, N. Zhao, and J. Paradiso. Energy-efficient control of polychromatic solid-state lighting using a sensor network. In Proc. of SPIE, volume 7784, 2010.
[2] R. H. Dodier, G. P. Henze, D. K. Tiller, and X. Guo. Building occupancy detection through sensor belief networks. Energy and Buildings, 38(9):1033-1043, 2006.
[3] D. Kim, J. Lee, Y. Jang, and J. Cha. Smart LED lighting system implementation using human tracking US/IR sensor. In Proc. Int. Conf. on ICT Convergence (ICTC).
[4] S. P. Meyn, A. Surana, Y. Lin, S. M. Oggianu, S. Narayanan, and T. A. Frewen. A sensor-utility-network method for estimation of occupancy in buildings. In Proc. of IEEE Conference on Decision and Control (CDC), pages 1494-1500, 2009.
[5] N. Oi and H. Takahashi. Preferred combinations between illuminance and color temperature in several settings for daily living activities. In Proc. of International Symposium on Design of Artificial Environments, pages 214-215, 2007.
[6] J. A. Paradiso, M. Aldrich, and N. Zhao. Energy-efficient control of solid-state lighting. SPIE Newsroom, 2011.
[7] Y.-J. Wen, J. Bonnell, and A. M. Agogino. Energy conservation utilizing wireless dimmable lighting control in a shared-space office. In Proc. of the Annual Conference of the Illuminating Engineering Society, 2008.