DigiLog Space Generator for Tele-collaboration in an Augmented Reality Environment

Kyungwon Gil, Taejin Ha, Woontack Woo
KAIST UVR Lab, Daejeon, S. Korea
{kgil, taejinha, ...}

Abstract. Tele-collaboration allows users to connect with a partner or their family in a remote place. Generally, tele-collaboration is performed in front of a camera and a screen; because of these fixed positions, such systems are limited for users who move around. This paper proposes an augmented-reality-based DigiLog Space Generator. It can generate a space of interest and combine it with a remote space in real time while preserving the user's freedom of movement. Our system uses a reference object to calculate the scale and coordinates of the space; the scale and coordinates are saved in a database (DB) and used for realistic combination of spaces. The DigiLog Space Generator is applicable to many AR applications. We discuss the experiences and limitations of our system, and outline future research.

Keywords: Augmented Reality; Tele-collaboration; Human-Computer Interaction

1 Introduction

Tele-collaboration technology aims at connecting users in different locations through a network so that they can achieve common goals efficiently by working together. Typical tele-collaboration at present involves a user who communicates with remote participants and shares digital information in front of a screen. There are many applications for tele-collaboration: users "attend" Web conferences with a company partner far away, using video-conference or data-conference systems; teachers and students look at the same data and study together in their own homes or offices; and so on. In recent years, tele-medicine has enabled a distant doctor to provide medical treatment to a patient in his or her own home. These tele-collaboration technologies can overcome the spatial limitations of traditional collaboration, because they allow collaborators to transcend space. Moreover, tele-collaboration requires no expenditure of time and money for attending meetings.

However, this kind of remote collaboration is mainly possible within a spatially limited environment where the users do not move. For example, in a 2-D display environment, users in a fixed location can write and draw together with a remote participant, or show marker-based AR information, using a monitor or a large display [1-3]. In a 2.5-D display environment, users can collaborate in a limited 3-D environment using a curved screen installed in a fixed location [4]. Multiple cameras or depth cameras are used in 3-D display environments [5-7], but the tele-collaboration systems studied so far still work only in front of a fixed camera or screen.

We therefore need a novel method that ensures the user's mobility and allows full 3-D interaction. This paper proposes an augmented reality (AR) based DigiLog Space Generator, which can generate a 3-D collaboration space simply, in an arbitrary, everyday environment. A DigiLog Space is a combination of the physical world and a mirror world (a digitization of the real world) [8]. The real world and the mirror world are connected bidirectionally, and in a DigiLog Space information is shared in real time. The DigiLog Space Generator creates these DigiLog Spaces and enables tele-collaboration by converging them. That is, the DigiLog Space Generator is composed of two parts: the technology for generating a DigiLog Space, and the technology for combining several DigiLog Spaces. The overall concept and procedure are illustrated in Figure 1.

When we generate a space in the real world, we also generate a mirror world. A mirror world is not a physical space but a virtual one; it has no inherent physical scale and therefore uses an arbitrary scale. If the scales of the mirror world and the real world differ, a remote space combines with the mirror world unnaturally, because the remote space and the virtual information it contains appear larger or smaller than the real objects. To solve this problem, we calculate the scale of the generated space and save it, so that the user can combine his or her own space and the remote space realistically using the physical scale. We also evaluate the accuracy of the generated space experimentally.

The remainder of this paper is organized as follows: Section 2 explains the system design of the DigiLog Space Generator. Section 3 presents the implementation details and experimental results of in-situ space modeling and combination. Finally, the conclusion and future directions of the DigiLog Space Generator are summarized in Section 4.

Figure 1: DigiLog Space generation and combination.

2 DigiLog Space Generator

Figure 2 is a block diagram of the overall system. It is composed of a DigiLog Space Generation Module, a DigiLog Space Combination Module, and a DigiLog Space Management Module.
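
Throughout this pipeline, the DigiLog Space Management Module must keep, for each generated space, the data that later make realistic combination possible: a space identifier, the reference-object coordinates, the physical scale, and the pose, as described below. The following is only an illustrative sketch of one way such a record and its lookup could be organized; all type, field, and function names are hypothetical and the actual storage format may differ.

// Minimal sketch of a DigiLog Space record as it might be kept by the
// management module (hypothetical names; no schema is prescribed here).
#include <array>
#include <map>
#include <string>

struct Pose {
    std::array<double, 3> translation;   // position of the space origin
    std::array<double, 4> rotation;      // orientation as a quaternion (x, y, z, w)
};

struct DigiLogSpaceRecord {
    std::string spaceId;                 // unique ID of the generated space
    std::string referenceObjectId;       // ID of the modeled object of interest (OoI)
    Pose referenceCoordinates;           // reference coordinate frame on the OoI
    double metricScale;                  // physical units per reconstruction unit
    Pose spacePose;                      // pose of the partial space in the feature map
};

// Hypothetical in-memory "DB": remote spaces are looked up by the ID of the
// common reference object when a combination request arrives.
class DigiLogSpaceManagement {
public:
    void store(const DigiLogSpaceRecord& rec) {
        byReferenceObject_[rec.referenceObjectId] = rec;
    }
    const DigiLogSpaceRecord* findByReferenceObject(const std::string& ooiId) const {
        auto it = byReferenceObject_.find(ooiId);
        return it == byReferenceObject_.end() ? nullptr : &it->second;
    }
private:
    std::map<std::string, DigiLogSpaceRecord> byReferenceObject_;
};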

The DigiLog Space Generation Module reconstructs a space in real time from images captured by the HMD camera and user input [9], producing a 3D feature map. Next, an object of interest (OoI) is modeled as a reference; the modeled object has its own local coordinates and scale. A plane is then created and extruded through user input. Once the system has generated the space using the 3D feature map, the space can be tracked. When a space is generated, its space ID, coordinates, scale, and pose are stored in DigiLog Space Management. The system then performs a combination step to bring in a remote space: when the reference object is detected, the system requests space sharing from DigiLog Space Management, retrieves the remote space from the DB, and combines the multiple spaces based on the reference object. Finally, the user can see the AR information and communicate with the remote partner.

Figure 2: Overall procedure of the DigiLog Space Generator.

We generate only a partial space of interest, as needed, to ensure the user's mobility and to share information efficiently. Because the generated space contains data for camera tracking, the HMD camera can be tracked within the space, and the virtual information to be shared can be registered accurately in the space as seen from the HMD camera. Conventional methods of space generation are generally conducted offline, because reconstructing a space takes too much time; such offline reconstruction is separated from the information-sharing step, so users can share information only in a fixed space that has already been built. To define a DigiLog Space in an arbitrary environment, we propose a simple real-time space-modeling method for DigiLog Space generation. Our system captures environmental images with the HMD camera and reconstructs the environment as a 3D feature map based on the structure-from-motion (SFM) method. A DigiLog Space is then defined by the user in the physical environment. We model only the partial space of interest, for real-time operation, and share information selectively. By generating a partial space, information outside the space can be hidden using an occlusion effect, and we can define the range of space that we want to share.
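
The 3D feature map above can be obtained with standard two-view structure-from-motion. The listing below is an illustrative sketch of that step using the OpenCV library on which our implementation is based; it is not the actual reconstruction code, it covers only a single frame pair, and the camera intrinsics K are assumed to be known.

// Two-view SFM sketch: build a sparse 3D feature map from two HMD frames.
// K is the 3x3 camera intrinsic matrix (CV_64F), assumed to be calibrated.
#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat buildFeatureMap(const cv::Mat& frame1, const cv::Mat& frame2, const cv::Mat& K)
{
    // Detect and describe ORB features in both frames.
    cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    orb->detectAndCompute(frame1, cv::noArray(), kp1, desc1);
    orb->detectAndCompute(frame2, cv::noArray(), kp2, desc2);

    // Match descriptors with cross-checking to reduce outliers.
    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);

    std::vector<cv::Point2f> pts1, pts2;
    for (const auto& m : matches) {
        pts1.push_back(kp1[m.queryIdx].pt);
        pts2.push_back(kp2[m.trainIdx].pt);
    }

    // Estimate the relative camera motion from the essential matrix.
    cv::Mat mask;
    cv::Mat E = cv::findEssentialMat(pts1, pts2, K, cv::RANSAC, 0.999, 1.0, mask);
    cv::Mat R, t;
    cv::recoverPose(E, pts1, pts2, K, R, t, mask);

    // Triangulate matched points. The translation, and hence the map, is in an
    // arbitrary scale until the reference object fixes the metric scale.
    cv::Mat P1 = K * cv::Mat::eye(3, 4, CV_64F);
    cv::Mat Rt;
    cv::hconcat(R, t, Rt);
    cv::Mat P2 = K * Rt;
    cv::Mat points4D;
    cv::triangulatePoints(P1, P2, pts1, pts2, points4D);
    return points4D;  // 4xN homogeneous 3D points: the sparse feature map
}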

The modeling of an object of interest (OoI) is an important cue for combining remote spaces and bringing in augmented information. First, we set the reference coordinates on the OoI, because the reference coordinates of a space are needed to bring in the remote shared space and AR information. When the user selects the bottom line of the reference object, reference coordinates are created at the bottom-left corner of the OoI by refining the coordinates. The OoI is therefore modeled and tracked separately from the generated DigiLog Space, with its own coordinate system; for this, multiple feature-vocabulary trees are managed for the space and the objects [10]. If the user inputs the physical scale of the OoI, our system calculates the scale of the generated space from the scale of the OoI. We thus know the physical scale of the mirror world and save that scale in the DB. We assume that the user and the partner use a common OoI as the reference object. The modeled common OoI has the same feature map, scale, and coordinates, so we can combine the different spaces suitably. In other words, multiple DigiLog Spaces are synchronized through the connected coordinates, and sharing of AR information in real time becomes possible in the combined space.

The whole scenario of the DigiLog Space Generator is shown in Figure 3. User A, wearing a video see-through HMD, generates his own space and models the OoI in situ with the HMD camera and a handheld input device. User A's input in the real world creates mirror world A at the same time, so user A, wearing the HMD, sees the real world together with the mirror world. User B then combines her DigiLog Space with remote space A, using a modeled dragon poster created by the DigiLog Space Generator as the reference object. User B, wearing an HMD, can see both her own virtual information and user A's. User B can therefore share 3-D information and communicate live in the 3-D environment while walking. A limitation is that the reference object must always remain visible in the user's view.
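
The scale recovery described above reduces to a single ratio: the physical size of the common OoI, entered by the user, divided by the length of the same edge in the reconstruction's arbitrary units. The following is a minimal sketch under that reading; the type and function names are hypothetical and do not correspond to the implementation.

// Metric-scale recovery from the reference object (OoI). The user enters the
// object's physical width; the same edge measured in the reconstruction's
// arbitrary units yields a scale factor for the whole generated space.
#include <cmath>

struct Point3 { double x, y, z; };

double distance(const Point3& a, const Point3& b)
{
    const double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Returns centimeters per reconstruction unit.
double metricScaleFromOoI(const Point3& bottomLeft, const Point3& bottomRight,
                          double physicalWidthCm)
{
    return physicalWidthCm / distance(bottomLeft, bottomRight);
}

// Converts a generated-space extent from map units to centimeters, so that a
// remote space fetched from the DB can be combined at a realistic size.
Point3 toMetric(const Point3& extentInMapUnits, double scale)
{
    return { extentInMapUnits.x * scale,
             extentInMapUnits.y * scale,
             extentInMapUnits.z * scale };
}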

3 Experiment and Implementation

To verify the performance of the DigiLog Space Generator, we measured the accuracy of the generated space by comparing the scale of the physical space with that of the generated space (the mirror world); minimizing the error between the two spaces is important for realistic combination. In our experiment, we set the partial space as shown in Figure 4, where the yellow box indicates the partial space. To make the space, the user generates coordinates at a reference object (the poster) through input, then touches the four corners of the front wall to generate a plane, and finally extrudes the plane by interaction. The scale of the generated space is then calculated and stored in the DB. The user performed the space-generation procedure 10 times. The scale of the real space is 180 (x-axis) x 170 (y-axis) x 90 (z-axis) cm. To generate a spatially stable space, the reference image must stay entirely within the HMD camera view, so the user creates the space from a spot 4 meters away from the reference image. For stable tracking, the user moves only in translation. The experiments were performed in a typical indoor environment.

A video see-through HMD was used, and the camera on the HMD captured 30 image frames per second. Our system was implemented using OpenCV (the open computer vision library) and OSG (the open scene graph library). Optionally, in order to view the remote partner, a depth camera on a stand was used to detect and segment a person.

Figure 3: The whole scenario of the DigiLog Space Generator: (a) in-situ DigiLog Space generation: 3D feature extraction and space extrusion; (b) in-situ DigiLog Space combination: a user brings in a remote shared space and AR information.
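
The plane-and-extrusion step used in the experiment above can be expressed with simple vector geometry: the four touched corner points define the front plane, and the box-shaped partial space is obtained by offsetting them along the plane normal. The following is an illustrative sketch under that assumption, with hypothetical names; the actual implementation may differ.

// Box-shaped partial-space construction. The front plane is defined by the
// four corner points the user touches; the plane is then extruded along its
// normal by a user-chosen depth.
#include <array>
#include <cmath>

struct Point3 { double x, y, z; };

Point3 sub(const Point3& a, const Point3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Point3 add(const Point3& a, const Point3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Point3 cross(const Point3& a, const Point3& b)
{
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
Point3 normalize(const Point3& v)
{
    const double n = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / n, v.y / n, v.z / n};
}

// front: the four touched corners of the front plane, in order around the
// rectangle. depth: extrusion distance along the plane normal (same units).
// Returns the eight corners of the extruded partial space.
std::array<Point3, 8> extrudePartialSpace(const std::array<Point3, 4>& front, double depth)
{
    const Point3 n = normalize(cross(sub(front[1], front[0]), sub(front[3], front[0])));
    const Point3 offset = {n.x * depth, n.y * depth, n.z * depth};
    std::array<Point3, 8> box;
    for (int i = 0; i < 4; ++i) {
        box[i] = front[i];
        box[i + 4] = add(front[i], offset);
    }
    return box;
}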

Figure 4: Experiment environment and steps of generating a space.

Figure 5(a) shows the real space (a gray box) and the corner points created by user input (blue points). If the difference between these corner points and the corners of the box is small, the generated space is similar to the real space. Our generated space is fairly accurate, as the blue points are clustered around the box corners. When the averages of the x, y, and z values were analyzed, the average errors in x and y were under 5 cm; however, the average error in z is relatively large, at 22.3 cm. This points to a limitation of our system: the user may not have known exactly where the target point along the z-axis lay, because the depth of the real space is ambiguous. Also, when the user creates the plane while standing only at the front, it is generated aslant, because the 3D points are created inaccurately. For accuracy, the user should create the plane while moving through a variety of viewing angles.

Figure 5: Experiment result: (a) corner points of the space created by the user; the scale of the target space is 180 (x-axis) x 170 (y-axis) x 90 (z-axis) cm; (b) distance errors between the two spaces along each axis.
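
The per-axis errors reported in Figure 5(b) are the mean absolute differences between the user-created corner points and the ground-truth box corners. A minimal sketch of this evaluation is given below, assuming matched point pairs in centimeters; the names are hypothetical.

// Per-axis accuracy evaluation: mean absolute error between user-created
// corner points and ground-truth box corners, as in Figure 5(b).
#include <cmath>
#include <cstddef>
#include <vector>

struct Point3 { double x, y, z; };

Point3 meanAxisError(const std::vector<Point3>& measured,
                     const std::vector<Point3>& groundTruth)
{
    Point3 sum{0.0, 0.0, 0.0};
    const std::size_t n = measured.size();
    for (std::size_t i = 0; i < n; ++i) {
        sum.x += std::fabs(measured[i].x - groundTruth[i].x);
        sum.y += std::fabs(measured[i].y - groundTruth[i].y);
        sum.z += std::fabs(measured[i].z - groundTruth[i].z);
    }
    return {sum.x / n, sum.y / n, sum.z / n};
}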

Figure 6 shows an implementation of DigiLog Space combination. A user combines his or her space with a remote space based on the modeled objects; this combined DigiLog Space enables the sharing of 3-D information and live communication.

Figure 6: A user exploits a video see-through HMD for modeling, tracking objects of interest, and sharing a DigiLog Space, and can collaborate with a remote partner through AR information in the combined space.

4 Conclusion

In this paper, we introduce a novel AR-based tele-collaboration technology. Our method is more effective than a conventional, immovable video-conferencing system that relies on cumbersome equipment and installations: we can generate a space of interest and combine spaces in real time, so our system can be used at home, in the classroom, and in other settings. For accurate combination, we use a common reference object. The reference object provides the coordinates and scale of the space, so our system can easily synchronize multiple DigiLog Spaces using this coordinate and scale information.

There are currently limitations in the implementation. Our system does not work well in environments with repetitive textures or no texture, and the generated space and virtual information become unstable if the OoI is occluded. The ambiguous scale along the z-axis is also problematic. For future work, we plan to use an RGB-D feature map, including both color and depth information, for modeling and tracking. Compared with RGB feature points alone, this will make the generated space more stable in featureless environments and under varying lighting, and using depth values when generating the space will give the user an unambiguous z scale. Once our system is updated, it will be applicable to many AR applications, including experimental education, urban planning, military simulation, collaborative surgery, etc.

Acknowledgements

This work was supported by the Global Frontier R&D Program on <Human-centered Interaction for Coexistence>, funded by the National Research Foundation of Korea grant funded by the Korean Government (MEST).

References

[1] M. Kuechler and A. M. Kunz, "Collaboard: A remote collaboration groupware device featuring an embodiment-enriched shared workspace," 16th ACM International Conference on Supporting Group Work.
[2] J. Tang, J. Marlow, A. Hoff, A. Roseway, K. Inkpen, C. Zhao, and X. Cao, "Time Travel Proxy: Using Lightweight Video Recordings to Create Asynchronous, Interactive Meetings," SIGCHI Conference on Human Factors in Computing Systems.

[3] I. Barakonyi, T. Fahmy, and D. Schmalstieg, "Remote collaboration using augmented reality video conferencing," Graphics Interface 2004, pp. 89-96.
[4] H. Benko, R. Jota, and A. Wilson, "MirageTable: Freehand Interaction on a Projected Augmented Reality Tabletop," SIGCHI Conference on Human Factors in Computing Systems.
[5] O. Schreer, I. Feldmann, N. Atzpadin, P. Eisert, P. Kauff, and H. Belt, "3DPresence - A system concept for multi-user and multi-party immersive 3D videoconferencing," CVMP.
[6] K. Kim, J. Bolton, A. Girouard, J. Cooperstock, and R. Vertegaal, "TeleHuman: Effects of 3D perspective on gaze and pose estimation with a life-size cylindrical telepresence pod," SIGCHI Conference on Human Factors in Computing Systems.
[7] N. Lehment, K. Erhardt, and G. Rigoll, "Interface Design for an Inexpensive Hands-Free Collaborative Videoconferencing System," ISMAR, 2012.
[8] T. Ha, H. Lee, and W. Woo, "DigiLog Space: Real-Time Dual Space Registration and Dynamic Information Visualization for 4D+ Augmented Reality," ISUVR, 2012.
[9] T. Ha and W. Woo, "ARWand for an Augmented World Builder," IEEE 3DUI, 2013 (in press).
[10] K. Kim, V. Lepetit, and W. Woo, "Real-time interactive modeling and scalable multiple object tracking for AR," Computers & Graphics, vol. 36, no. 8, 2012.
