Communicating Agents Architecture with Applications in Multimodal Human Computer Interaction
Maximilian Krüger, Achim Schäfer, Andreas Tewes, Rolf P. Würtz
Institut für Neuroinformatik, Ruhr-Universität Bochum

Abstract: We present our idea of solving parts of the vision task with an organic computing approach. We have designed a multiagent system (MAS) of many different modules working on different tasks towards the same goal, the recognition of a human gesture. We introduce our notion of an agent and show how a group of agents can cooperate with other organically inspired techniques such as democratic integration for information merging and bunch graph matching for face/object recognition. In this article we explain the need, the concept, and the software for our work. Two applications, currently limited to the tracking problem, demonstrate that the concept is fruitful and in use.

1 Introduction

Human computer interaction (HCI) is one of the main tasks of the future, and its improvement is the goal of our work. Although there are many ways of communicating with a machine, such as speech and touch-screens, we have focused on computer vision, a field that has shown many advances but is still under construction [vdM04].

We split the task of human gesture recognition into two areas. The first is the analysis of visual data, i.e., the identification and tracking of human extremities. To solve this problem, multiple modalities such as color, texture, and motion have to be interpreted. In the second, gesture recognition based on the collected information is performed. Both areas shall form a system that can handle varying environmental conditions. Each work step has to solve the same kind of problem: imperfect information sources. We believe that organic computing, with its self-x properties, points in the right direction.

One of the key concepts of organic computing is sub-system integration. The technique of democratic integration [TvdM01] implements a self-organizing integration of cues with self-healing properties. It has proven to work quite well for simple tracking problems. As we started to extend this method, we realized that the existing implementation was too rigid. The need for multiple independent instances of trackers, each working on its individual set of cues, and the integration of top-down information led us to the construction of a multiagent system (MAS).

Our architecture is built in a modular fashion; agents work on different tasks in different layers. These agents show the following self-x properties: they are autonomous, they have a perception of their surroundings to which they can adapt, and they are capable of communicating with other entities. The breakdown of sensor information and a changing environment are treated in a robust and flexible manner. Our multiagent network can handle agents with tasks ranging from the coordination of sub-processes and the tracking of an image point up to the recognition of human extremities. To accomplish our aim of human gesture recognition, teamwork between agents and a hierarchically ordered structure are essential.

The following sections present our approach to a MAS and sketch the software architecture. Because the system is still under development, the two applications cover only the tracking part of the task.
2 Framework

As a realization of interacting entities, a MAS was chosen. The design is modular with well-defined interfaces. The components are agents (with a cue integrator and different sensors as sub-modules), a blackboard, and an environment.

The environment supplies information about the world. Depending on the desired functionality of the whole system, this can simply be an image sequence. The environment may, however, also supply preprocessed images, e.g., gray value images and difference images, treated as different modalities.

An agent is an autonomous entity which gets information from the environment and processes it to reach a decision about further actions. Each agent can decide which parts of the information provided by the environment it requests and makes available to its sensors. They in turn individually process their information and represent the agent's perception of the environment. Sensors can be flexibly attached or detached during run-time. In each agent, integration of the sensors' output is done by a cue integrator, which is implemented according to the agent's general goal and the specific sensors it uses. Alternatively, an agent can consult knowledge sources which provide information about the world stored in the system. We call these agents top-down agents. For example, the face recognition agent presented in the next section is a top-down agent that uses general face knowledge. Training the top-down agents, i.e., learning the world knowledge from examples, is also a crucial task requiring organic computing methodology, see [Wü04].

Communication within the system is done via a blackboard. After registering with the blackboard, each agent can write messages onto the blackboard and read messages other agents have posted. Messages can be quite complex, have a defined lifetime, and are posted to a group of receivers. The message handling allows the creation of new agents with specified properties, the change of properties of agents, and also the elimination of agents during run-time, provided they were given this functionality. This architecture offers the possibility to establish a hierarchy of agents, where higher agents manage lower ones and collect, process, and evaluate the information from them.

Altogether, the software framework makes it possible to develop autonomous, communicating, and possibly hierarchically ordered multiagent systems. So far, the framework is not restricted to vision, but can incorporate non-visual modalities as well.
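To make the communication scheme more tangible, the following minimal Python sketch illustrates the kind of interface described above: agents register with a blackboard, post messages that carry a lifetime and a group of receivers, and read the messages addressed to them. Class and method names are our illustrative assumptions, not the project's actual code.

```python
# Minimal sketch (assumed names, not the original implementation) of the
# blackboard communication described above: agents register, post messages
# with a lifetime and a receiver group, and read what is addressed to them.
from dataclasses import dataclass


@dataclass
class Message:
    sender: str
    receivers: set   # group of receiver names; an empty set means broadcast
    content: dict    # payload may be arbitrarily complex
    lifetime: int    # remaining ticks before the message expires


class Blackboard:
    def __init__(self):
        self.agents = {}      # name -> agent
        self.messages = []

    def register(self, agent):
        self.agents[agent.name] = agent

    def post(self, message):
        self.messages.append(message)

    def read(self, name):
        """Return all live messages addressed to `name` (or broadcast)."""
        return [m for m in self.messages
                if not m.receivers or name in m.receivers]

    def tick(self):
        """Age messages and drop the ones whose lifetime has expired."""
        for m in self.messages:
            m.lifetime -= 1
        self.messages = [m for m in self.messages if m.lifetime > 0]


class Agent:
    def __init__(self, name, sensors, cue_integrator):
        self.name = name
        self.sensors = sensors               # can be attached/detached at run-time
        self.cue_integrator = cue_integrator

    def step(self, environment, blackboard):
        # Request only the modalities this agent is interested in.
        percepts = [s.process(environment.get(s.modality)) for s in self.sensors]
        decision = self.cue_integrator.integrate(percepts)
        blackboard.post(Message(sender=self.name, receivers=set(),
                                content={"decision": decision}, lifetime=5))
        return decision
```

A coordinating agent higher in the hierarchy could use the same interface to post creation or elimination requests for lower agents.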
3 Applications

As we are working on HCI, our applications are designed to solve problems like pointing gesture and sign language recognition. They depend on a reliable and robust tracking of human body extremities. Since human motion has many degrees of freedom, the tracking is faced with the problem of cluttered scenes and the possibility of objects leaving the image borders. Based on the organic computing concept and the presented framework, we developed a multiagent network with different entities working on local or global scales.

We designed tracking agents whose task is to follow an object. These agents merge different visual cues like color, texture, etc. Cue fusion is done by the scheme of democratic integration, which offers a flexible and robust way of tracking; for a detailed view refer to [TvdM01]. The tracking agents possess the ability to rate their own action: they offer a confidence value, which assesses the tracking quality and becomes lower if the individual agent is losing its target. Because they work only on a small part of the image, they are called local agents. The network is hierarchically organized; information from the tracking agents (like position and movement of the object) is sent to higher, globally working stages. These currently monitor the tracking agents, instantiating and deleting them. Further high-level agents will be implemented in the course of the project.
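As a rough illustration of how such a cue integrator works, the sketch below follows the democratic integration scheme of [TvdM01]: each sensor delivers a saliency map, the maps are fused with adaptive reliability weights, the maximum of the fused map gives the target estimate, and each cue's reliability relaxes towards its agreement with that joint estimate. The value of the fused map at the estimated position can serve as the confidence value mentioned above. Variable names, the normalization, and the time constant are our assumptions, not the published parameter choices.

```python
# Sketch of democratic integration for cue fusion (after Triesch & von der
# Malsburg); names and constants are illustrative, not the original code.
import numpy as np


def democratic_integration_step(saliency_maps, reliabilities, tau=10.0):
    """One fusion step.

    saliency_maps : list of 2-D arrays, one per cue, values in [0, 1]
    reliabilities : 1-D array of per-cue weights, summing to 1
    returns       : (target position, confidence, updated reliabilities)
    """
    # Weighted sum of the individual cue maps.
    fused = sum(r * a for r, a in zip(reliabilities, saliency_maps))

    # The joint estimate is the maximum of the fused map; its value is used
    # as the agent's confidence in the current tracking result.
    target = np.unravel_index(np.argmax(fused), fused.shape)
    confidence = fused[target]

    # Quality of each cue = its response at the jointly estimated position,
    # normalized so the qualities sum to one.
    qualities = np.array([a[target] for a in saliency_maps])
    qualities = qualities / (qualities.sum() + 1e-9)

    # Reliabilities relax towards the current qualities (self-organization):
    # cues that agree with the joint result gain influence, others lose it.
    reliabilities = reliabilities + (qualities - reliabilities) / tau
    reliabilities = reliabilities / reliabilities.sum()

    return target, confidence, reliabilities
```

A tracking agent would call such a step once per frame on the saliency maps its sensors compute inside its region of interest.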
3.1 Hand Gesture Recognition

For hand gesture recognition, a special agent performs a global search for skin-colored and moving objects. If successful, it can assign a tracking agent to the object. Tracking agents can be seen in Figure 1; their region of interest is marked by a gray rectangle, and the white circle encloses the estimated target position. Because tracking is a difficult task, we need a self-controlling mechanism. If the tracking agent is not convinced of its tracking result, if two agents are tracking the same object, or if the object being tracked is of no use for the gesture recognition, the agent has the possibility of self-destruction, a self-healing mechanism of the system.

Figure 1: Parts of an image sequence of a hand tracking process. The tracking agents' regions of interest are marked by gray rectangles and their target positions by circles.

The tracking agent receives information from top-down agents like a face detecting agent. This agent checks whether a tracking agent is following a face. Face recognition is performed by bunch graph matching [WFKvdM97], see also [Wü04]. In Figure 1 the dark gray rectangle indicates that this tracking agent was informed by the face detecting agent that the tracked object is a face. This and other top-down knowledge brought into the system supports the classification of objects and gestures. The first results are encouraging; some more work will be put into the dynamics to make the tracking more robust and reliable.
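The self-controlling mechanism described above can be pictured as a small decision rule inside each tracking agent. The following sketch is our own illustration with assumed thresholds and message fields; it is not taken from the system's implementation.

```python
# Illustrative self-monitoring rule of a tracking agent (assumed names and
# thresholds): the agent destroys itself when its confidence drops, when it
# duplicates another tracker, or when a top-down agent marks its target as
# irrelevant for gesture recognition.
import numpy as np

CONFIDENCE_THRESHOLD = 0.3   # assumed value
MIN_TARGET_DISTANCE = 15.0   # pixels; assumed value


def should_self_destruct(confidence, own_position, messages):
    # 1. The agent no longer trusts its own tracking result.
    if confidence < CONFIDENCE_THRESHOLD:
        return True
    for msg in messages:  # messages read from the blackboard sketched above
        content = msg.content
        # 2. Another tracking agent reports (almost) the same target position;
        #    only the weaker of two duplicate trackers gives up.
        if content.get("type") == "target_position":
            other_pos = np.asarray(content["position"], dtype=float)
            close = np.linalg.norm(
                other_pos - np.asarray(own_position, dtype=float)
            ) < MIN_TARGET_DISTANCE
            if close and content.get("confidence", 0.0) > confidence:
                return True
        # 3. A top-down agent declared the tracked object useless for the task.
        if content.get("type") == "object_label" and content.get("label") == "irrelevant":
            return True
    return False
```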
3.2 Arm Gesture Analysis

To identify arm or body gestures, the motion of points other than the hand and the face needs to be analyzed. This task induces additional difficulties because, e.g., the color and the texture of clothes have a greater variety than skin color. In the current state, the application supports user-assisted learning of arm gesture models. With one mouse click per agent, the user positions tracking agents at pre-defined points on the body of the person performing the arm gesture. The tracking agents' sensors and parameters can be chosen individually. Another agent is designed to control the birth of new tracking agents at the given positions with the given parameters and to monitor their activity. It collects and saves all data, which can then be analyzed.

The short-term objective is the analysis of the tracking process. These results will help to increase the tracking performance in other applications. On the one hand, we want to understand better which sensor configuration at which position contributes to the performance of a tracking agent. On the other hand, we want to learn what is important to successfully discriminate one gesture from the others. The analysis should give information about the essential features of gestures in general and of the specific gesture in particular. Learned properties of different arm gestures can then be used for efficient gesture recognition. In future versions, successful tracking parameter sets shall be determined automatically by the process of self-organization and self-monitoring. The learning itself and the gesture recognition based on the learning still need to be realized.

At the current state of development, the system must be bootstrapped with a manual selection of points to be tracked. This burden will be removed from the user by gradually passing decisions over to specialized agents.

4 Discussion

We have set up a flexible framework for communicating agents and have shown its usefulness and flexibility with two applications. Tracking of points, especially hands and head, in the hand gesture recognition application already works quite well. Furthermore, the application for arm gesture analysis provides an interface for learning parameters, where tracking agents can easily be positioned on image sequences and the data can be analyzed. Positioning other types of agents can easily be built in.

In the future, the communication between the agents will be used to produce positive feedback that will further stabilize the system. Learning relative positions of human body extremities from examples is another goal on the way to gesture recognition. Also, more top-down knowledge, like a hand recognition agent, will be attached to the system. Up to now, a module for the recognition of different gestures is in the conceptual state.

We have presented an approach to gesture recognition by organic computing methodology. A multiagent system has been chosen because it meets the flexibility and self-healing requirements, which are crucial for the task of, ultimately, understanding the movements a user performs in front of a camera and making this understanding useful for human-friendly human computer interaction.

Acknowledgements: Funding by the European Commission in the Research and Training Network MUHCI (HPRN-CT) is gratefully acknowledged.

References

[TvdM01] Triesch, J. and von der Malsburg, C.: Democratic integration: Self-organized integration of adaptive cues. Neural Computation, 13(9), September 2001.

[vdM04] von der Malsburg, C.: Vision as an exercise in Organic Computing. In: This workshop.

[WFKvdM97] Wiskott, L., Fellous, J.-M., Krüger, N., and von der Malsburg, C.: Face recognition by elastic bunch graph matching. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), 1997.

[Wü04] Würtz, R. P.: Organic Computing for face and object recognition. In: This workshop.