Cloud-Empowered Multimedia Service: An Automatic Video Storytelling Tool
Joseph C. Tsai, Foundation of Computer Science Lab., The University of Aizu, Fukushima-ken, Japan
Neil Y. Yen, School of Computer Science and Engineering, The University of Aizu, Fukushima-ken, Japan (neilyyen@u-aizu.ac.jp)

Abstract — Video storytelling has become a popular technology that lets users design and plan their films. With the development of multimedia devices and networks, more and more people share their productions on the Internet. This approach is used not only in the movie industry, but also in e-learning, digital archives, and art. Cloud computing can provide various computing and storage services over the Internet. Instead of using a 3-D model generation system, in this paper we propose a system composed of several video technologies that lets teachers plan materials with existing avatars and scenes from a cloud database. Users first design the background with cloud-based panorama generation, then drag a trajectory line on the scene. The authoring tool selects an avatar from the database and animates it with one of several kinds of behaviors. Users can select several behaviors along the motion trajectory; the tool searches for the most suitable avatar and inserts it into the panorama. Special functions that integrate different motion tracks for the generation of video narratives are also presented in this paper.

Keywords — motion interpolation; video planning; panorama generation; layer segmentation; cloud computing

I. INTRODUCTION

As interactive media has become pervasive and popular in our lives, more and more researchers focus on this field. Following this trend, authoring tools are widely applied in various applications. For instance, users can employ video editing software to retouch video and generate a new film. Many researchers therefore combine video authoring tools with education to generate varied teaching materials. How to attract students' attention and make courses interesting is the most important goal for teachers.
Although many learning systems have been proposed, most of these tools are not easy to operate, and ease of use is essential for any authoring tool: statistics show that more and more teachers choose such tools to generate their courses. Indeed, applying supplementary teaching materials such as videos, photos, or games helps improve the effectiveness of teaching. Cloud computing has become one of the most popular research fields in recent years. With its development, people can store every kind of data in the cloud through the Internet. The cloud server provides hardware devices and software services as a data center, so users only need a PC with a network connection to use cloud services. Combining multimedia services with cloud computing, called cloud media, is therefore another new research field. The main goal of cloud multimedia is to provide effective, flexible, and scalable data processing; it also meets today's user requirements for high-quality multimedia and applications. In other words, with the development of networks and ubiquitous devices such as smartphones, people attach importance to the quality of multimedia. By providing multimedia services through the cloud, users can enjoy those services anywhere without expensive devices. Therefore, in this paper we propose a storytelling tool that uses cloud computing and storage to let users generate a new film. Users only need to choose a panoramic figure or upload a video to generate a background, then decide a trajectory on the frame. Several behaviors are provided to generate avatars with special motions; the system searches for the best-fitting avatars in the cloud database and inserts them into the panoramic figure.
Interactive storytelling [1] can use tracking to extract an avatar from real video and combine it with a virtual reality model. In [1], the authors proposed an artificial intelligence system that lets users interact with a virtual avatar. However, extracting a precise object from a video is a critical step in making the generated video look real. In [2], Khan et al. proposed a novel object tracking algorithm for handling complex cases in different scenarios. The algorithm employs particle filters and multi-mode anisotropic mean shift, improving on previous methods so that the tracker can detect the target object more accurately. Even if the target is occluded by other objects, the tracker still maintains a precise tracking box on the target. The method uses online learning so that the tracking result is not influenced by noisy shape changes. Processing multimedia services over the Internet requires attention to performance and quality. In [8], Zhu et al. proposed a multimedia-aware cloud that shows how a cloud performs distributed multimedia processing and storage. Quality of service is another key problem in cloud technology; a media-edge cloud (MEC) architecture is employed to solve the performance problem. The main contribution of this paper is a multilayer video planning system based on cloud computing. It includes a new object tracking and extraction mechanism, panorama generation, and spatiotemporal placement for deciding an avatar's location in the scene. A video clip is a spatiotemporally continuous list of frames, and each frame is composed of regions that can be treated as video layers. If video layers can be manipulated properly, special video effects (or forgery video) can be produced. Producing video narratives is a complicated procedure that involves a series of challenging problems.
Copyright © 2010 Future Technology Research Association International
Volume 4, Number 3, September 2013

We summarize the procedure in several steps here, with detailed solutions discussed in Sections 2 and 3.

Panorama Generation (video scene): We use a frame-referencing technique to find the most similar patches from other frames to replace the areas occupied by foreground objects. After all frames are restored, motion estimation is used to combine them into a panorama scene. Users can then manipulate this panorama in the system to design a narrative film.

Behavior Clip Preprocessing: To design a new video or story, users plan the motion trajectory of an object or avatar. We define several behaviors for users to select. For each behavior, the system searches the cloud database for the best-fitting motion clips. A technique of object size adjustment is adopted here to regulate the size of objects at different layers/depths. Furthermore, in order to generate a video according to the video plan, an algorithm for motion interpolation/extrapolation of avatars [3] is used to control the motion speed of each avatar and keep the length of the avatar clips the same as that of the panorama scene video.

Video Narrative Generation: The spatiotemporal placement of avatars (including Z distance) is calculated, with possible conflicts identified and resolved by the user. Video frames and layers are calibrated with the selected video scene in a video schematic. A layer merging mechanism is applied to generate the final video narrative.

Cloud Data Management and Computation: A cloud database is well suited to storing multimedia data. Since it is scalable and flexible, users can upload large amounts of metadata to the database, and can access not only their own data but also others' information. We apply cloud computing to object extraction, panorama generation, and video narrative generation.

Figure 1. The system framework: users upload videos or photos to the cloud server, then use the system to design the video.

Figure 2.
The video planning example: two avatars are selected and their trajectories are designed on the panorama frame.

Fig. 1 illustrates the system framework. Users upload their videos or avatars to the cloud database, and all of the preprocessing, such as avatar extraction, panorama generation, and avatar motion generation, is computed through cloud computing. The video planning framework is shown in Fig. 2. Users design the avatars' motion trajectories, and the cloud computing system computes the location of each avatar in order to adjust its size. In this figure there are two avatars; the orange and blue lines are their trajectories. The red dotted frame marks the part of the video playing at that time. The system refers to the trajectory and the length of the panorama to compute the correct avatar location and size. The first step, processing the background video into a panorama, is discussed in Section 2. We describe the remaining steps in Section 3. Finally, the results and conclusion are presented in Sections 4 and 5. Fig. 3 shows the flowchart of our system. If users upload a video with several
foregrounds, the object extraction method is applied to generate avatars and upload them to the database in step 2. After this step, the video inpainting algorithm is used to restore the damaged areas of the video, and the inpainting results are combined into a panorama. In step 4, an interface is provided to let users design the new film; it loads metadata from the cloud database and generates the new film via cloud computing. We explain all of the steps in detail in the following sections.

Figure 3. The system flowchart.

II. PANORAMA GENERATION

The first challenge of this system is background video generation. In order to let users operate the system conveniently, we propose a panorama generation algorithm that creates a background figure on which users design the new video. To ensure the generated background video contains no avatar, we use video inpainting [4] to remove all of the objects in the video. Before running the inpainting algorithm, we have to mark or extract the existing foreground objects from the videos.

A. Object Tracking and Extracting

To extract objects from video, we propose an object tracking and extraction algorithm that segments the foreground from the background. The extracted foreground clips can be used as new avatars; the processing of the avatars is discussed in the next section. In our algorithm, motion estimation, grab-cut, and mean shift are utilized to track and segment the selected object from the video. Although many object tracking algorithms have been proposed, most of them are hard to map to cloud computing; therefore, we combine several methods in this system. First, users select and mark the target object in the first frame of the video clip. To ensure the extracted avatar contains no background pixels, users need to mark the target object accurately. We adopt the CDHS algorithm [5] to estimate the motion vector of each block on the contour of the target object.
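As a concrete illustration, per-block motion estimation can be sketched as below. This is a minimal sketch only: it uses an exhaustive sum-of-absolute-differences (SAD) search as a simplified stand-in for the much faster cross-diamond-hexagonal search (CDHS) of [5], and the function names and toy frames are our own.

```python
import numpy as np

def block_motion_vector(prev, curr, top, left, block=7, radius=8):
    """Estimate the motion vector of one block by exhaustive SAD search.
    A full search is used here as a simplified stand-in for CDHS [5],
    which visits far fewer candidate positions."""
    h, w = prev.shape
    ref = prev[top:top + block, left:left + block].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > h or x + block > w:
                continue  # candidate block falls outside the frame
            cand = curr[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(ref - cand).sum()  # sum of absolute differences
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

# toy example: a bright square shifted 3 px right between frames
prev = np.zeros((32, 32), np.uint8); prev[10:17, 5:12] = 200
curr = np.zeros((32, 32), np.uint8); curr[10:17, 8:15] = 200
print(block_motion_vector(prev, curr, 10, 5))  # -> (0, 3)
```

In the real system such vectors are computed only for blocks on the marked object's contour, which keeps the per-frame cost low.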
According to the motion vector of each block, we can compute a rough position of the target object in the next frame. Once a rough tracking window in the next frame is computed, the grab-cut algorithm [6] is employed to segment the foreground from the rough tracking result. At the beginning of this algorithm, we collect information about the target object, including color and structure, so that detailed information can be checked and the foreground and background labels made much clearer. As a double check of the extracted result, the mean-shift algorithm [7] is applied for color segmentation and clustering; we use it to distinguish foreground from background. The color information of the object masked by the user is processed by a mean-shift filter, producing a 3-D color matrix of the object. The colors of the grab-cut result are converted into CIELuv and compared with this matrix, which eliminates background pixels and yields a more accurate extraction.

B. Object Removal and Panorama Generation

In this step, we combine video inpainting and motion estimation to complete the panorama generation. To restore the damaged part of the video, the video inpainting algorithm [4] is employed; it maintains the temporal continuity of the video, so the removed part is smoother after inpainting. Panorama generation is based on the video inpainting results, which we use as reference frames to generate the panorama figure. The motion estimation algorithm [5] is used to compute the motion map between frames. The motion map determines the additional spatial range that the next frame contributes beyond the current frame, so we can combine the additional range of each frame into a panorama mosaic.

III. VIDEO PLANNING

In our system, users select the avatars and background scene from the cloud database.
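Before moving on, the mosaic assembly of Section II.B can be sketched as follows. This is a simplified sketch that assumes a pure horizontal pan, with per-frame offsets already recovered from motion estimation; the function and variable names are our own illustration, not the paper's implementation.

```python
import numpy as np

def build_panorama(frames, offsets):
    """Stitch inpainted frames into a panorama mosaic.
    offsets[i] is the estimated horizontal camera shift (in pixels) of
    frame i relative to frame 0, e.g. derived from the motion map.
    Later frames overwrite the overlap and contribute their new columns."""
    h, w = frames[0].shape[:2]
    width = max(offsets) + w  # total canvas width
    canvas = np.zeros((h, width) + frames[0].shape[2:], frames[0].dtype)
    for frame, off in zip(frames, offsets):
        canvas[:, off:off + w] = frame  # paste frame at its offset
    return canvas

# toy example: three 4x6 frames panning right by 2 px per frame
frames = [np.full((4, 6), v, np.uint8) for v in (50, 100, 150)]
pano = build_panorama(frames, [0, 2, 4])
print(pano.shape)  # -> (4, 10)
```

The real system uses the inpainted (object-free) frames as input, so the resulting mosaic is a clean background for video planning.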
The motion trajectory of the selected avatar on the panorama mosaic is also decided in this step, and there are two types of motion track for users to choose from. In order to produce a realistic result, several techniques should be considered, such as object size adjustment and motion clip
processing, which includes motion interpolation/extrapolation. Object size adjustment regulates the size of an object at different video layers/depths. Motion interpolation and extrapolation produce a suitable video length for each motion clip.

A. Avatar Size Adjustment and Motion Interpolation/Extrapolation

Our cloud system lets users upload avatars of every kind and resolution. However, this makes it difficult to compute the camera parameters that would normally serve as references for adjusting an avatar's size or the depth of the background video. Therefore, we assume each object's height is 170 cm and normalize all avatars to the same size. When an avatar is placed at the middle of the video, its height is set to a quarter of the height of the panorama figure. With this rule, we can adjust all avatars from the database to a suitable size and make the generated video realistic. To generate a meaningful video narrative, an algorithm for motion interpolation/extrapolation of avatars is used to produce a suitable video length for each motion clip. In an earlier work [3], the authors proposed an algorithm that extracts an object from video and analyzes its motion over the whole video sequence. Once the detailed motion information of an object is obtained, motion interpolation can be used to generate new postures of the object from the original motion clip.

B. Motion Track and Video Planning

The basic element for telling a story in a video narrative is the motion clip. A motion clip can be a target object tracked from a video, not necessarily one captured by the same camera as the background video. Essentially, a motion clip relies on the user to choose a motion track in the story.
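The size normalization and the extrapolated track of Section III.A can be sketched as below. The paper fixes only one anchor (a quarter of the panorama height at the vertical middle); the linear growth of size with vertical position (nearer = lower in the frame = larger) is our own simplifying assumption, and all names are illustrative.

```python
def avatar_height(pano_height, y_ratio):
    """Scale an avatar by its vertical position on the panorama.
    Every avatar is normalized as if 170 cm tall; one standing at the
    vertical middle (y_ratio = 0.5) is drawn at a quarter of the
    panorama height. Linear scaling with y_ratio is an assumption."""
    return pano_height * 0.25 * (y_ratio / 0.5)

def extrapolate_clip(clip_frames, target_len):
    """Extrapolated motion track: repeat the motion cycle until the
    clip matches the background panorama video's length."""
    return [clip_frames[i % len(clip_frames)] for i in range(target_len)]

print(avatar_height(480, 0.5))         # -> 120.0
print(extrapolate_clip([0, 1, 2], 8))  # -> [0, 1, 2, 0, 1, 2, 0, 1]
```

Motion interpolation, which synthesizes genuinely new postures between existing ones, is considerably more involved and follows [3]; cycling frames as above only covers the extrapolation case for repeated motions.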
The challenging issue is to provide a spatiotemporal placement of motion clips and to define how motion tracks can be merged. The story generation algorithm uses the panorama as a timeline. Our current implementation provides two basic types of motion track, plus a merged form:

Regular Motion Track: Users choose the speed of the avatar's movement, which can be adjusted by the interpolation step above.

Extrapolated Motion Track: The cycle of the avatar's motion is repeated until the end of the background video.

Special Behavior Track: The above two types of motion track can be merged to create new motion tracks.

The spatiotemporal placement algorithm takes several steps:
1. The user selects a motion clip, decides its motion track and starting location, and gives extra parameters such as speed and length of extrapolation.
2. For each motion track:
   2.1. Compute the time slots of appearance.
   2.2. Compute the relative position of the object in each frame.
3. The user selects a playback speed; the playback tool decides the position of frames in the panorama and combines the frames into a video.

IV. EXPERIMENT RESULTS

We demonstrate video planning in this section. Each example combines layers from at least two video clips, with the tone and size of all foreground objects adjusted. In our experiments, the block size used in object tracking is 7-by-7 pixels, chosen for performance; the patch size used in the proposed motion interpolation procedure is 3-by-3-by-3 pixels. Some examples are illustrated in Fig. 4. Fig. 4(a) shows the user design steps: users select avatars from the database and draw the avatars' trajectory lines on the generated panorama. N1 to N4 are four sets of video narratives. N1 illustrates three original persons removed and a flipped person inserted. N2 illustrates special effects such as avatars walking at different speeds and a spark motion.
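Returning briefly to Section III.B, steps 2.1 and 2.2 of the placement algorithm can be sketched as below. The track encoding (start/end points, entry frame, speed in pixels per frame) is our own illustrative representation of the user's parameters, not the system's actual data model, and a straight-line trajectory is assumed for simplicity.

```python
def place_avatar(track, n_frames):
    """Steps 2.1-2.2: for one motion track, compute the time slot of
    appearance and the avatar's position in each frame by linear
    interpolation along the drawn trajectory."""
    (x0, y0), (x1, y1) = track["start"], track["end"]
    length = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    duration = max(1, round(length / track["speed"]))  # frames to traverse
    first = track["entry_frame"]
    last = min(n_frames - 1, first + duration)         # 2.1: time slot
    positions = {}
    for f in range(first, last + 1):
        t = (f - first) / duration                     # 2.2: position
        positions[f] = (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
    return (first, last), positions

# an avatar entering at frame 10, crossing 80 px at 8 px/frame
slot, pos = place_avatar(
    {"start": (0, 100), "end": (80, 100), "entry_frame": 10, "speed": 8},
    n_frames=120)
print(slot)     # -> (10, 20)
print(pos[15])  # -> (40.0, 100.0)
```

Overlapping time slots from different tracks are where placement conflicts arise; as described above, the system flags these for the user to resolve.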
N3 and N4 demonstrate two kinds of motion combined and interpolated, then inserted in different layers and video scenes. Table 1 summarizes the computation time of the different steps for the examples shown in Fig. 4. We subdivide the video processing procedure into the following steps:
1. Object Tracking and Extracting: This algorithm is described in Section 2. The target object is extracted and the background is restored by inpainting.
2. Panorama Generation: In Section 2.B, we explain how to generate a panorama from the inpainted background clips.
3. Avatar Size Adjustment: The size of each avatar is considered so that the result can be normalized.
4. Motion Interpolation: This step includes several parts, including motion analysis, patch assertion, and motion completion via inpainting; mainly, the algorithms discussed in Section 3.A are evaluated.
5. Video Planning: In this step, the avatars are combined with the background.

TABLE I. THE PERFORMANCE OF EACH STEP IN THIS SYSTEM (computation time of steps 1-5 and the total, for video sequences N1-N4).
Figure 4. Experimental results: (a) user design steps; N1-N4: the four sets of generated video narratives.

V. CONCLUSION

We proposed a series of mechanisms to generate video narratives from existing video clips. We use patch referencing and frame blending to generate a video scene as a base on which the user plans video tracks during narrative generation. Object tone and size adjustment make the result more realistic. We allow video layers from different videos to be combined by adjusting the saturation, the intensity, and the spatiotemporal placement of layers. We demonstrate the feasibility of using our mechanisms for special effect production in digital movies. Applications of our mechanism include video forgery and special effect production. A few limitations still exist in our proposed algorithm; we consider the following issues as future work: Non-repeated motions cannot be extrapolated, so we are investigating more sophisticated interpolation methods based on 3-D reconstruction techniques.
Realistic shadows cannot be produced by our algorithm. The selected video used to generate a video scene should contain at least one foreground object so that object sizes can be regulated correctly.

REFERENCES
[1] F. Charles, M. Cavazza, S. J. Mead, O. Martin, A. Nandi, and X. Marichal, "Compelling experiences in mixed reality interactive storytelling," Proceedings of the 2004 ACM SIGCHI International Conference on Advances in Computer Entertainment Technology, 2004.
[2] Z. H. Khan, I. Y. Gu, and A. Backhouse, "Robust Visual Object Tracking using Multi-Mode Anisotropic Mean Shift and Particle Filters," IEEE Transactions on Circuits and Systems for Video Technology, Vol. 21, No. 1.
[3] J. C. Tsai, H.-H. Hsu, S.-M. Chang, Y.-H. Wang, C. C. Chao, and T. K. Shih, "Motion Extraction via Human Motion Analysis," Journal of Mobile Multimedia, Vol. 6, No. 1, pp. 63-72.
[4] T. K. Shih, N. C. Tang, and J.-N. Hwang, "Exemplar-based Video Inpainting without Ghost Shadow Artifacts by Maintaining Temporal Continuity," IEEE Transactions on Circuits and Systems for Video Technology, Vol. 19, No. 3, March 2009.
[5] C.-H. Cheung and L.-M. Po, "Novel cross-diamond-hexagonal search algorithms for fast block motion estimation," IEEE Transactions on Multimedia, Vol. 7, No. 1, Feb. 2005.
[6] C. Rother, V. Kolmogorov, and A. Blake, ""GrabCut": interactive foreground extraction using iterated graph cuts," ACM Transactions on Graphics, Vol. 23.
[7] D. Comaniciu and P. Meer, "Mean Shift: A Robust Approach toward Feature Space Analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 5.
[8] W. Zhu, C. Luo, J. Wang, and S. Li, "Multimedia Cloud Computing," IEEE Signal Processing Magazine, Vol. 28, No. 3.

BIOGRAPHIES
Joseph C. Tsai is a postdoctoral fellow in the Foundation of Computer Science Lab. at The University of Aizu, Japan. He received his BS, MS, and Ph.D. degrees from Tamkang University, Taiwan, in 2006, 2008, and 2013, respectively.
His research interests include computer vision, image and video processing, cloud computing, and their applications.

Neil Y. Yen received doctorates in Human Sciences from Waseda University, Japan, and in Engineering from Tamkang University, Taiwan. His doctorate at Waseda University was funded by the JSPS (Japan Society for the Promotion of Science) under the RONPAKU program. He joined The University of Aizu, Japan, as an associate professor. Dr. Yen has been engaged extensively in an interdisciplinary field of research whose themes are in the scope of Big Data Science, Computational Intelligence, and Human-centered Computing. He has been actively involved in the research community, serving as guest editor, associate editor, and reviewer for international refereed journals, and as organizer/chair of ACM/IEEE-sponsored conferences, workshops, and special sessions. He is a member of the IEEE Computer Society, the IEEE Systems, Man, and Cybernetics Society, and the technical committee on awareness computing (IEEE SMCS).
More informationGuide to Film Analysis in the Classroom ACMI Education Resource
Guide to Film Analysis in the Classroom ACMI Education Resource FREE FOR EDUCATIONAL USE - Education Resource- Guide to Film Analysis Page 2 CONTENTS THIS RESOURCE... 4 Characterisation... 4 Narrative...
More informationDolby Vision for the Home
Dolby Vision for the Home 1 WHAT IS DOLBY VISION? Dolby Vision transforms the way you experience movies, TV shows, and games with incredible brightness, contrast, and color that bring entertainment to
More informationLimitation of Super Resolution Image Reconstruction for Video
2013 Fifth International Conference on Computational Intelligence, Communication Systems and Networks Limitation of Super Resolution Image Reconstruction for Video Seiichi Gohshi Kogakuin University Tokyo,
More informationSemantic Video Annotation by Mining Association Patterns from Visual and Speech Features
Semantic Video Annotation by Mining Association Patterns from and Speech Features Vincent. S. Tseng, Ja-Hwung Su, Jhih-Hong Huang and Chih-Jen Chen Department of Computer Science and Information Engineering
More informationThe Scientific Data Mining Process
Chapter 4 The Scientific Data Mining Process When I use a word, Humpty Dumpty said, in rather a scornful tone, it means just what I choose it to mean neither more nor less. Lewis Carroll [87, p. 214] In
More informationVIRGINIA WESTERN COMMUNITY COLLEGE
36T Revised Fall 2015 Cover Page 36TITD 112 21TDesigning Web Page Graphics Program Head: Debbie Yancey Revised: Fall 2015 Dean s Review: Deborah Yancey Dean 21T Lab/Recitation Revised Fall 2015 None ITD
More informationHow To Filter Spam Image From A Picture By Color Or Color
Image Content-Based Email Spam Image Filtering Jianyi Wang and Kazuki Katagishi Abstract With the population of Internet around the world, email has become one of the main methods of communication among
More informationEffective Use of Android Sensors Based on Visualization of Sensor Information
, pp.299-308 http://dx.doi.org/10.14257/ijmue.2015.10.9.31 Effective Use of Android Sensors Based on Visualization of Sensor Information Young Jae Lee Faculty of Smartmedia, Jeonju University, 303 Cheonjam-ro,
More informationTABLE OF CONTENTS SURUDESIGNER YEARBOOK TUTORIAL. IMPORTANT: How to search this Tutorial for the exact topic you need.
SURUDESIGNER YEARBOOK TUTORIAL TABLE OF CONTENTS INTRODUCTION Download, Layout, Getting Started... p. 1-5 COVER/FRONT PAGE Text, Text Editing, Adding Images, Background... p. 6-11 CLASS PAGE Layout, Photo
More informationTracking Groups of Pedestrians in Video Sequences
Tracking Groups of Pedestrians in Video Sequences Jorge S. Marques Pedro M. Jorge Arnaldo J. Abrantes J. M. Lemos IST / ISR ISEL / IST ISEL INESC-ID / IST Lisbon, Portugal Lisbon, Portugal Lisbon, Portugal
More informationVideo-Based Eye Tracking
Video-Based Eye Tracking Our Experience with Advanced Stimuli Design for Eye Tracking Software A. RUFA, a G.L. MARIOTTINI, b D. PRATTICHIZZO, b D. ALESSANDRINI, b A. VICINO, b AND A. FEDERICO a a Department
More informationContext-aware Library Management System using Augmented Reality
International Journal of Electronic and Electrical Engineering. ISSN 0974-2174 Volume 7, Number 9 (2014), pp. 923-929 International Research Publication House http://www.irphouse.com Context-aware Library
More informationProfessor, D.Sc. (Tech.) Eugene Kovshov MSTU «STANKIN», Moscow, Russia
Professor, D.Sc. (Tech.) Eugene Kovshov MSTU «STANKIN», Moscow, Russia As of today, the issue of Big Data processing is still of high importance. Data flow is increasingly growing. Processing methods
More informationData Transfer Technology to Enable Communication between Displays and Smart Devices
Data Transfer Technology to Enable Communication between Displays and Smart Devices Kensuke Kuraki Shohei Nakagata Ryuta Tanaka Taizo Anan Recently, the chance to see videos in various places has increased
More informationGraphic Design. Background: The part of an artwork that appears to be farthest from the viewer, or in the distance of the scene.
Graphic Design Active Layer- When you create multi layers for your images the active layer, or the only one that will be affected by your actions, is the one with a blue background in your layers palette.
More informationA Cognitive Approach to Vision for a Mobile Robot
A Cognitive Approach to Vision for a Mobile Robot D. Paul Benjamin Christopher Funk Pace University, 1 Pace Plaza, New York, New York 10038, 212-346-1012 benjamin@pace.edu Damian Lyons Fordham University,
More informationVideo, film, and animation are all moving images that are recorded onto videotape,
See also Data Display (Part 3) Document Design (Part 3) Instructions (Part 2) Specifications (Part 2) Visual Communication (Part 3) Video and Animation Video, film, and animation are all moving images
More informationAccurate and robust image superresolution by neural processing of local image representations
Accurate and robust image superresolution by neural processing of local image representations Carlos Miravet 1,2 and Francisco B. Rodríguez 1 1 Grupo de Neurocomputación Biológica (GNB), Escuela Politécnica
More informationIMPROVING QUALITY OF VIDEOS IN VIDEO STREAMING USING FRAMEWORK IN THE CLOUD
IMPROVING QUALITY OF VIDEOS IN VIDEO STREAMING USING FRAMEWORK IN THE CLOUD R.Dhanya 1, Mr. G.R.Anantha Raman 2 1. Department of Computer Science and Engineering, Adhiyamaan college of Engineering(Hosur).
More informationFlorida International University - University of Miami TRECVID 2014
Florida International University - University of Miami TRECVID 2014 Miguel Gavidia 3, Tarek Sayed 1, Yilin Yan 1, Quisha Zhu 1, Mei-Ling Shyu 1, Shu-Ching Chen 2, Hsin-Yu Ha 2, Ming Ma 1, Winnie Chen 4,
More informationA RFID Data-Cleaning Algorithm Based on Communication Information among RFID Readers
, pp.155-164 http://dx.doi.org/10.14257/ijunesst.2015.8.1.14 A RFID Data-Cleaning Algorithm Based on Communication Information among RFID Readers Yunhua Gu, Bao Gao, Jin Wang, Mingshu Yin and Junyong Zhang
More informationDiscovering Computers 2008. Chapter 3 Application Software
Discovering Computers 2008 Chapter 3 Application Software Chapter 3 Objectives Identify the categories of application software Explain ways software is distributed Explain how to work with application
More informationThe Study on the Graphic Design of Media art: Focusing on Projection Mapping
, pp.14-18 http://dx.doi.org/10.14257/astl.2015.113.04 The Study on the Graphic Design of Media art: Focusing on Projection Mapping Jihun Lee 1, Hyunggi Kim 1 1 Graduate School of Advanced Imaging Science,
More informationReal Time Target Tracking with Pan Tilt Zoom Camera
2009 Digital Image Computing: Techniques and Applications Real Time Target Tracking with Pan Tilt Zoom Camera Pankaj Kumar, Anthony Dick School of Computer Science The University of Adelaide Adelaide,
More informationThree Methods for Making of Character Facial Animation based on Game Engine
Received September 30, 2014; Accepted January 4, 2015 Three Methods for Making of Character Facial Animation based on Game Engine Focused on Scene Composition of Machinima Game Walking Dead Chanho Jeong
More informationEfficient Coding Unit and Prediction Unit Decision Algorithm for Multiview Video Coding
JOURNAL OF ELECTRONIC SCIENCE AND TECHNOLOGY, VOL. 13, NO. 2, JUNE 2015 97 Efficient Coding Unit and Prediction Unit Decision Algorithm for Multiview Video Coding Wei-Hsiang Chang, Mei-Juan Chen, Gwo-Long
More informationSUPER RESOLUTION FROM MULTIPLE LOW RESOLUTION IMAGES
SUPER RESOLUTION FROM MULTIPLE LOW RESOLUTION IMAGES ABSTRACT Florin Manaila 1 Costin-Anton Boiangiu 2 Ion Bucur 3 Although the technology of optical instruments is constantly advancing, the capture of
More informationTo determine vertical angular frequency, we need to express vertical viewing angle in terms of and. 2tan. (degree). (1 pt)
Polytechnic University, Dept. Electrical and Computer Engineering EL6123 --- Video Processing, S12 (Prof. Yao Wang) Solution to Midterm Exam Closed Book, 1 sheet of notes (double sided) allowed 1. (5 pt)
More informationLecture Notes, CEng 477
Computer Graphics Hardware and Software Lecture Notes, CEng 477 What is Computer Graphics? Different things in different contexts: pictures, scenes that are generated by a computer. tools used to make
More informationLOCAL SURFACE PATCH BASED TIME ATTENDANCE SYSTEM USING FACE. indhubatchvsa@gmail.com
LOCAL SURFACE PATCH BASED TIME ATTENDANCE SYSTEM USING FACE 1 S.Manikandan, 2 S.Abirami, 2 R.Indumathi, 2 R.Nandhini, 2 T.Nanthini 1 Assistant Professor, VSA group of institution, Salem. 2 BE(ECE), VSA
More informationWHITE PAPER. Are More Pixels Better? www.basler-ipcam.com. Resolution Does it Really Matter?
WHITE PAPER www.basler-ipcam.com Are More Pixels Better? The most frequently asked question when buying a new digital security camera is, What resolution does the camera provide? The resolution is indeed
More informationA Robust Multiple Object Tracking for Sport Applications 1) Thomas Mauthner, Horst Bischof
A Robust Multiple Object Tracking for Sport Applications 1) Thomas Mauthner, Horst Bischof Institute for Computer Graphics and Vision Graz University of Technology, Austria {mauthner,bischof}@icg.tu-graz.ac.at
More informationVideo compression: Performance of available codec software
Video compression: Performance of available codec software Introduction. Digital Video A digital video is a collection of images presented sequentially to produce the effect of continuous motion. It takes
More informationKlaus Goelker. GIMP 2.8 for Photographers. Image Editing with Open Source Software. rocky
Klaus Goelker GIMP 2.8 for Photographers Image Editing with Open Source Software rocky Table of Contents Chapter 1 Basics 3 1.1 Preface....4 1.2 Introduction 5 1.2.1 Using GIMP 2.8 About This Book 5 1.2.2
More informationVEHICLE LOCALISATION AND CLASSIFICATION IN URBAN CCTV STREAMS
VEHICLE LOCALISATION AND CLASSIFICATION IN URBAN CCTV STREAMS Norbert Buch 1, Mark Cracknell 2, James Orwell 1 and Sergio A. Velastin 1 1. Kingston University, Penrhyn Road, Kingston upon Thames, KT1 2EE,
More informationLow-resolution Image Processing based on FPGA
Abstract Research Journal of Recent Sciences ISSN 2277-2502. Low-resolution Image Processing based on FPGA Mahshid Aghania Kiau, Islamic Azad university of Karaj, IRAN Available online at: www.isca.in,
More informationSOUTHERN REGIONAL SCHOOL DISTRICT BUSINESS CURRICULUM. Course Title: Multimedia Grade Level: 9-12
Content Area: Business Department Course Title: Multimedia Grade Level: 9-12 Unit 1 Digital Imaging 10 Weeks Unit 2 Cell Animation 10 Weeks Unit 3 Sound Editing 10 Weeks Unit 4 Visual Editing 10 Weeks
More informationTracking Moving Objects In Video Sequences Yiwei Wang, Robert E. Van Dyck, and John F. Doherty Department of Electrical Engineering The Pennsylvania State University University Park, PA16802 Abstract{Object
More informationPallas Ludens. We inject human intelligence precisely where automation fails. Jonas Andrulis, Daniel Kondermann
Pallas Ludens We inject human intelligence precisely where automation fails Jonas Andrulis, Daniel Kondermann Chapter One: How it all started The Challenge of Reference and Training Data Generation Scientific
More informationVision based approach to human fall detection
Vision based approach to human fall detection Pooja Shukla, Arti Tiwari CSVTU University Chhattisgarh, poojashukla2410@gmail.com 9754102116 Abstract Day by the count of elderly people living alone at home
More informationJournal of Industrial Engineering Research. Adaptive sequence of Key Pose Detection for Human Action Recognition
IWNEST PUBLISHER Journal of Industrial Engineering Research (ISSN: 2077-4559) Journal home page: http://www.iwnest.com/aace/ Adaptive sequence of Key Pose Detection for Human Action Recognition 1 T. Sindhu
More informationUsing Photorealistic RenderMan for High-Quality Direct Volume Rendering
Using Photorealistic RenderMan for High-Quality Direct Volume Rendering Cyrus Jam cjam@sdsc.edu Mike Bailey mjb@sdsc.edu San Diego Supercomputer Center University of California San Diego Abstract With
More informationLesson 3: Behind the Scenes with Production
Lesson 3: Behind the Scenes with Production Overview: Being in production is the second phase of the production process and involves everything that happens from the first shot to the final wrap. In this
More informationA Method of Caption Detection in News Video
3rd International Conference on Multimedia Technology(ICMT 3) A Method of Caption Detection in News Video He HUANG, Ping SHI Abstract. News video is one of the most important media for people to get information.
More informationThe School-assessed Task has three components. They relate to: Unit 3 Outcome 2 Unit 3 Outcome 3 Unit 4 Outcome 1.
2011 School-assessed Task Report Media GA 2 BACKGROUND INFORMATION 2011 was the final year of accreditation for the Media Study Design 2003 2011. Comments in this report refer to the School-assessed Task
More informationCircle Object Recognition Based on Monocular Vision for Home Security Robot
Journal of Applied Science and Engineering, Vol. 16, No. 3, pp. 261 268 (2013) DOI: 10.6180/jase.2013.16.3.05 Circle Object Recognition Based on Monocular Vision for Home Security Robot Shih-An Li, Ching-Chang
More informationANIMATION a system for animation scene and contents creation, retrieval and display
ANIMATION a system for animation scene and contents creation, retrieval and display Peter L. Stanchev Kettering University ABSTRACT There is an increasing interest in the computer animation. The most of
More informationColour Image Segmentation Technique for Screen Printing
60 R.U. Hewage and D.U.J. Sonnadara Department of Physics, University of Colombo, Sri Lanka ABSTRACT Screen-printing is an industry with a large number of applications ranging from printing mobile phone
More informationTECHNOLOGY ANALYSIS FOR INTERNET OF THINGS USING BIG DATA LEARNING
TECHNOLOGY ANALYSIS FOR INTERNET OF THINGS USING BIG DATA LEARNING Sunghae Jun 1 1 Professor, Department of Statistics, Cheongju University, Chungbuk, Korea Abstract The internet of things (IoT) is an
More informationEdge tracking for motion segmentation and depth ordering
Edge tracking for motion segmentation and depth ordering P. Smith, T. Drummond and R. Cipolla Department of Engineering University of Cambridge Cambridge CB2 1PZ,UK {pas1001 twd20 cipolla}@eng.cam.ac.uk
More informationA Reliability Point and Kalman Filter-based Vehicle Tracking Technique
A Reliability Point and Kalman Filter-based Vehicle Tracing Technique Soo Siang Teoh and Thomas Bräunl Abstract This paper introduces a technique for tracing the movement of vehicles in consecutive video
More informationDesign of a NAND Flash Memory File System to Improve System Boot Time
International Journal of Information Processing Systems, Vol.2, No.3, December 2006 147 Design of a NAND Flash Memory File System to Improve System Boot Time Song-Hwa Park*, Tae-Hoon Lee*, and Ki-Dong
More informationHANDS-FREE PC CONTROL CONTROLLING OF MOUSE CURSOR USING EYE MOVEMENT
International Journal of Scientific and Research Publications, Volume 2, Issue 4, April 2012 1 HANDS-FREE PC CONTROL CONTROLLING OF MOUSE CURSOR USING EYE MOVEMENT Akhil Gupta, Akash Rathi, Dr. Y. Radhika
More informationTHE MS KINECT USE FOR 3D MODELLING AND GAIT ANALYSIS IN THE MATLAB ENVIRONMENT
THE MS KINECT USE FOR 3D MODELLING AND GAIT ANALYSIS IN THE MATLAB ENVIRONMENT A. Procházka 1,O.Vyšata 1,2,M.Vališ 1,2, M. Yadollahi 1 1 Institute of Chemical Technology, Department of Computing and Control
More informationSuper-resolution Reconstruction Algorithm Based on Patch Similarity and Back-projection Modification
1862 JOURNAL OF SOFTWARE, VOL 9, NO 7, JULY 214 Super-resolution Reconstruction Algorithm Based on Patch Similarity and Back-projection Modification Wei-long Chen Digital Media College, Sichuan Normal
More informationInformation Technology Cluster
Web and Digital Communications Pathway Information Technology Cluster 3D Animator This major prepares students to utilize animation skills to develop products for the Web, mobile devices, computer games,
More informationProjection Center Calibration for a Co-located Projector Camera System
Projection Center Calibration for a Co-located Camera System Toshiyuki Amano Department of Computer and Communication Science Faculty of Systems Engineering, Wakayama University Sakaedani 930, Wakayama,
More informationThe Visual Internet of Things System Based on Depth Camera
The Visual Internet of Things System Based on Depth Camera Xucong Zhang 1, Xiaoyun Wang and Yingmin Jia Abstract The Visual Internet of Things is an important part of information technology. It is proposed
More informationRanked Keyword Search in Cloud Computing: An Innovative Approach
International Journal of Computational Engineering Research Vol, 03 Issue, 6 Ranked Keyword Search in Cloud Computing: An Innovative Approach 1, Vimmi Makkar 2, Sandeep Dalal 1, (M.Tech) 2,(Assistant professor)
More informationPROGRAM CONCENTRATION: Business & Computer Science. COURSE TITLE: Introduction to Animation and 3d Design
PROGRAM CONCENTRATION: Business & Computer Science CAREER PATHWAY: Interactive Media COURSE TITLE: Introduction to Animation and 3d Design Introduction to Animation and 3d Design is a foundations course
More information