Spatial Interfaces
Editors: Bernd Froehlich and Mark Livingston

Beaming: An Asymmetric Telepresence System

Anthony Steed, William Steptoe, Wole Oyekoya, Fabrizio Pece, Tim Weyrich, and Jan Kautz, University College London
Doron Friedman, Interdisciplinary Center Herzliya
Angelika Peer, Technische Universität München
Massimiliano Solazzi, Franco Tecchia, and Massimo Bergamasco, Scuola Superiore Sant'Anna
Mel Slater, ICREA, University of Barcelona

IEEE Computer Graphics and Applications, November/December 2012. Published by the IEEE Computer Society.

Most collaboration tools, such as videoconferencing and collaborative virtual environments (CVEs), provide symmetric access to the shared medium. For example, in videoconferencing, each person usually sees a view of the other participants and their surroundings. Although these systems can be configured similarly to face-to-face meetings, they lack some of those meetings' immediacy. Researchers (including some of us) have argued that this is due partly to the systems' technical limitations, including the lack of representation of the full 3D space of the conversation. One alternative that supports full 3D space is collaborative VR, in which each user is surrounded by and immersed within the display systems.1 However, such technologies tend to be laboratory based and relatively uncommon. So, participants normally can't access these systems without leaving their usual work or living spaces.

The Beaming project (Being in Augmented Multimodal Naturally Networked Gatherings; www.beaming-eu.org) has tackled the technological and access issues head on. We've abandoned the symmetry of access to the collaboration technology, and the notion of a novel virtual environment in which collaboration happens. Instead, we focus on recreating, virtually, a real environment and having remote participants visit that virtual model. The display systems can be in any reasonable space, such as an office or meeting room, domestic environment, or social space.

An Overview

Figure 1 gives an overview of the Beaming system.
The destination is a real space populated with people we call locals. The transporter is a high-end VR system equipped with 3D surround visuals, 3D surround audio, tactile and haptic systems, and biosensing. The visitor is the transporter's user. The system aims to capture the destination and display it to the visitor, and simultaneously to capture the visitor and display him or her to the locals.

One goal of Beaming is that the destination shouldn't be a laboratory space with carefully calibrated equipment. Any technical intervention must be portable or mobile, self-calibrating, and dynamically configurable. It should also be as unobtrusive as possible so that it doesn't interfere with the locals' behavior. Figure 2 represents potential interventions, which include mobile robots, situated displays, wall or environment projections, augmented reality, camera capture, and audio capture.

User Experience Goals

Amy Voida and her colleagues noted that throughout more than 20 years of media space research, a recurrent theme has been the pursuit of symmetry, which has both social and technological components.2 Social symmetry is desirable so that
no participant is disadvantaged in his or her ability to interact with or influence other participants. Technologies such as videoconferencing and CVEs generally feature technological symmetry to ensure that the same sensory cues are available to all parties.

Figure 1. The Beaming system recreates a real environment (the destination) populated with people (locals). A remote participant (the visitor) visits that virtual model via the transporter. The arrow labels (capture destination, display destination, capture visitor, display visitor) indicate the four major technical challenges.

Both these technologies have limitations. Videoconferencing poorly supports eye gaze between participants3 and spatial references between users and objects in their environment.1,4 CVEs typically involve a restricted interface, so that users see the virtual world as if through a window. Steve Benford and his colleagues suggested that the interest in spatial approaches to computer-supported cooperative work might be viewed as a shift of focus toward supporting the context in which distributed work takes place, rather than the process of the work itself.5

The destination-visitor paradigm in Beaming is fundamentally technologically asymmetric but aims to support symmetric social interaction between the visitor and locals. That social interaction should be able to exploit the objects at the destination because these are a key component of the context in which the interaction takes place. The long-term goal for Beaming-like systems is to provide collaborative mixed-reality environments that grant symmetrical social affordances and sensory cues to all connected users, whether they're locals or visitors. Put another way, although the mediating technologies are highly asymmetric between the destination and transporter sites, visitors' behavior shouldn't be hindered because of their remote location.
Also, they should be represented to the locals with a virtual or physical embodiment. Using terminology from the virtual-environments field, we might say that we strive to give visitors a sense of spatial presence within the destination, and to give both locals and visitors a sense of copresence.

Figure 2. Types of technical intervention at the destination, including situated displays, surround cameras and microphones, a mobile platform, wide-angle projectors, and augmented-reality displays. The user experience goals, which arise from access issues, constrain the space of potential interventions.

Consequently, we claim that the visual display at the visitor site must be an immersive display such as a head-mounted display (HMD) or a display similar to a CAVE (cave automatic virtual environment). This is because, to strive for social symmetry, the system must provide similar sensory experiences, particularly in the dominant visual mode, to all parties. Locals need no visual mediation to perceive the destination as being realistic and spatial because they perceive the actual physical location. However, stimuli representing the destination must be transmitted in real time to
the visitor site. Moreover, the technological display properties used by the visitor must foster the impression of being physically at the destination. To this end, Ralph Schroeder considered immersive technology as an end state, in that synthetic and multiuser environments that are displayed to the senses can't be developed further than fully surrounding and spatial immersive systems.6 We expect that immersive presentation will increase the social symmetry in an asymmetric, heterogeneous system architecture.

A Beaming Example

One test case for the Beaming concept has been remote rehearsal between two actors and a director.7 Figure 3 shows the capture and display devices used to connect the actors.

Figure 3. The Beaming acting scenario. (a) The visitor actor wears a head-mounted display and a motion-tracking suit. (b) The local actor is at the destination, seeing visualizations of the visitor actor on a SphereAvatar display and a wall projection. A Kinect camera tracks the local actor, and a surround camera is below and to the right of the SphereAvatar. The visitor actor can see three visualizations of the local actor: (c) the image from the surround camera, (d) the reconstructed point cloud from the Kinect depth map inserted in a static 3D model of the destination, and (e) an avatar of the local from the Kinect skeleton tracking inserted in the static 3D model.

In this example, the visitor was an actor using an HMD and wearing a motion capture suit so that her movements were accurately tracked (see Figure 3a). The single local was also an actor, in an office environment. In that office, a SphereAvatar display showed the visitor's head and viewing direction, and a large projection screen showed her body movements (see Figure 3b). The local could choose between the two representations. At this time, we hadn't used any augmented-reality or robotic representations.
We captured the local with a Microsoft Kinect and a surround camera, a Point Grey Research Ladybug 3. We had constructed a model of the office offline via photogrammetry. So, the visitor could see one of three visualizations: a surround image (see Figure 3c), a 3D model with an inserted depth video of the local (see Figure 3d), and the 3D model with an animated avatar of the local (see Figure 3e). The two actors wore wireless microphones and talked to each other using Skype.

A theatrical rehearsal aims to practice and refine the performance of a scene. We chose a scene consisting of rich conversation, spatial action and interaction, direction of attention, and object handling, all of which are common in general social interaction and collaborative work. Through repeated and evolving run-throughs of the scene with professional actors and directors, this structured activity formed an excellent basis for analytic knowledge regarding the system's successes and failures.

The rehearsals explored the central aspects of spatiality and embodiment. Overall, the common spatial frame of reference (the destination) was
highly conducive to acting and directing, allowing blocking, gross gestures, and unambiguous instructions to be issued. The main lesson learned was thus that we achieved spatial interaction but that multiple representations were confusing. Visitors tended to prefer the 3D model with the avatar representation of the local because it was visually consistent and presented a spatial reference relative to their own location. Locals tended to prefer the wall display because it showed the visitor's body language. The communication's central limitation was the relative lack of expressivity of the actors' embodiments. This meant that moments relying on performing and reacting to facial expressions and subtle gestures were less successful. We expect SphereAvatar to be more useful when it can dynamically present the visitor's facial expressions.

The Beaming Technical View

The user experience goals set many technical challenges. In the Beaming project, we're investigating many of these in a range of modalities, including visual, audio, and haptic scanning, representation, and display, and emotion recognition and display. We eventually want to reconstruct the whole destination in real time. However, in early demonstrations we're using a variety of prebuilt models of spaces and objects (visual, audio, and haptic), either directly for rendering or to act as a source for trackers. Here, we give examples of technical demonstrators we built and how we're integrating them to achieve the project's long-term goal.

Robot Representation

Beaming uses two forms of robots.

Encountered-type haptic devices. Interaction with encountered-type haptic devices (see Figure 4a) lets the visitor perceive interaction with the objects at the destination or with locals. Such haptic devices approach the visitor only when contact needs to be rendered, thus guaranteeing perfect rendering of free space.
Different end effectors, including a prosthetic hand and a universal object that can morph its shape, display a series of human-human and human-object interactions. For example, in Figure 4a, the robot is presenting a hand to mimic a local's actions. In this situation, the visitor sees a representation of the local and the destination through the HMD. The visitor can see the local's movement, which the robot matches. The challenge here is to haptically reconstruct the destination in real time and remotely, so that the encountered-type haptic device can present all physical interactions. Currently, we do this by tracking locals with a Kinect array and using preconstructed models of the surfaces at the destination. The encountered-type haptic device can present both data types. We've tested the overall system's functionality; a detailed evaluation is part of future research.

Mobile robot avatars. This avatar represents the visitor at the destination (see Figure 4b). Unlike the encountered-type haptic device, which the visitor and locals don't see, the robot avatar is broadly anthropomorphic, with two robotic arms and hands and an emotion-expressing head. The visitor's arm and hand movements are tracked by the motion capture suit and mapped to the robot's movements. A camera looks underneath the HMD to capture facial expressions; the system analyzes them to recognize emotional states, which the emotion-expressing head then conveys. Key aspects of the robot avatar's development are the remote streaming of the visitor's movement and real-time remapping of the visitor's movements to the robot's degrees of freedom. This shares some similarities with previous telepresence research, such as the telexistence systems.8
However, Beaming emphasizes reconstruction of a 3D model of the destination for the visitor to interact with, so as to allay some issues of latency and dynamics of movement. Unlike in previous research on remote haptic interaction,9 our two haptic devices aren't directly slaved together. They're better described as loosely coupled: the encountered-type haptic device mimics the destination, and the robot avatar mimics the visitor's movements.

A Situated Display to Represent the Visitor

An emerging form of telecollaboration is situated displays at a physical destination that virtually represent remote visitors. SphereAvatar (see Figure 4c) is a situated display that's visible from all sides, so that locals can determine the visitor's identity from any viewpoint. Flat displays are visible from only the front and lack spherical displays' 360-degree and multiview capabilities. Exploiting spherical displays' unique characteristics might
bestow greater social presence on the participant represented on the sphere. SphereAvatar is small enough to situate almost anywhere in a room; for example, it could be on a seat next to a table or on a mobile platform. It can show either video captured from camera arrays around the visitor or a photorealistic computer-generated head.

Figure 4. Beaming technologies in development. (a) A visitor interacting with an encountered-type haptic device that mimics the form of the destination and the locals' interactions. (b) A robot avatar that mimics the visitor's movements and emotions. (c) A representation of the visitor on SphereAvatar. (d) A portable system for the visitor to get localized haptic feedback from the destination.

Depth Camera Streaming from the Destination

Cameras that can acquire a continuous stream of depth images are commonly available (for instance, Microsoft Kinect). They're particularly useful for Beaming because they let us easily capture the destination's geometric and texture information (for example, see Figure 3d). Streaming the raw data consumes significant bandwidth, so compressed streaming is desirable. Although much research exists on compressed streaming of (8-bit color) videos, little research exists on streaming (16-bit) depth videos. In particular, no codecs for depth compression are publicly available.

We adapted existing video encoders for depth streaming for two reasons.10 First, they're widely available. Second, using the same video codec to transfer both color and depth frames enhances consistency and simplifies the streaming architecture. We use a robust encoding of 16-bit depth values into 3 × 8 bits. This is essentially a variant of differential phase-shift encoding, such that the depth maps suffer from few compression artifacts. Using this encoding, we can stream depth maps over standard VP8 and H.264 codecs. This further allows us flexibility in where analysis of the depth maps occurs. Different machines can now perform different types of analysis on the same data (for example, extracting both point clouds and skeleton data).

Other Research Areas

The following R&D areas add to Beaming's uniqueness as a collaborative system.

Body Representation

Beaming explicitly uses recent cognitive-neuroscience research on body ownership illusions. Multisensory data streams that imply a changed body representation are readily interpreted as changes to the body. The most famous example of this is Matthew Botvinick and Jonathan Cohen's rubber-hand illusion.11 Tapping on a rubber hand on a table in front of a participant, while tapping on the participant's corresponding obscured real hand, led to the illusion that the rubber hand felt as if it were the participant's real hand. Researchers have applied this technique at the whole-body level, using video streaming through HMDs to generate out-of-the-body-type illusions and whole-body transformation illusions.

A body transformation illusion can also work strongly in VR. In one example, researchers gave men the illusion that their body was that of a small girl.12 This research also showed that seeing the virtual body from the first-person perspective contributed critically to the experience of ownership.

An important goal of Beaming is to give visitors a strong sensation of ownership regarding their body representation at the destination. Because visitors see their body representation naturally from the first-person perspective, a degree of such ownership will likely occur. In addition, haptic and visual feedback together provide multisensory feedback that enhances the ownership illusion. In applications in which it's desirable to transform the visitor's representation (for example, in acting rehearsals), such transformations have also been shown to preserve body ownership.
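The 16-bit-to-three-channel packing behind the depth-streaming approach described earlier can be illustrated with a deliberately simplified sketch. This is not the project's actual encoding, which spreads bits across channels so that lossy codecs introduce only small depth errors; it is a naive high-byte/low-byte split, and the function names and NumPy dependency are our own:

```python
import numpy as np

def encode_depth(depth16):
    """Pack a 16-bit depth map into an 8-bit, 3-channel image.

    Naive split: high byte, low byte, spare third channel.
    (The published Beaming codec instead uses a periodic,
    compression-robust scheme; see reference 10.)
    """
    high = (depth16 >> 8).astype(np.uint8)    # coarse depth
    low = (depth16 & 0xFF).astype(np.uint8)   # fine depth
    spare = np.zeros_like(high)               # unused channel
    return np.stack([high, low, spare], axis=-1)

def decode_depth(packed):
    """Recover the 16-bit depth map from the packed channels."""
    high = packed[..., 0].astype(np.uint16)
    low = packed[..., 1].astype(np.uint16)
    return (high << 8) | low

# Round-trip check on a synthetic Kinect-sized depth frame.
depth = np.random.randint(0, 2**16, size=(480, 640), dtype=np.uint16)
assert np.array_equal(decode_depth(encode_depth(depth)), depth)
```

The round trip is lossless before compression, but this naive split breaks down under a lossy codec: a one-unit quantization error in the high-byte channel shifts depth by 256 units. That is precisely why a periodic encoding, which bounds the depth error caused by small channel errors, is preferable in practice.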
A Visitor Proxy

We can envisage a future in which some sort of automated proxy partly fulfills people's communication needs. In Beaming, we've implemented a communication proxy that can not only represent visitors for a short period of time (for example, during a phone interruption) but also replace them in a whole meeting or presentation. (For instance, see the video at youtu.be/43l739kkffk.) The communication proxy thus isn't an autonomous virtual agent or a standard virtual avatar. It must be based on models of the owner's behavior and be aware of its owner's goals and preferences. The proxy can learn some of its owner's behaviors by observing the owner's motion and speech. For example, the proxy can learn certain aspects of its owner's body language and gestures. Other behaviors and higher-level goals would need to be explicitly encoded or modeled.

The proxy's implementation presents several challenges. The software must observe both the visitor and the real environment (that is, sensors at the destination). Beaming's software architecture supports this through a loosely coupled heterogeneous platform in which sensors, actuators, and displays collaborate through a shared data service. The proxy then needs traditional AI components such as gesture recognition and speech interpretation.

The proxy can also operate in a mixed mode. For example, while representing its owner in a meeting, if it's asked a question it doesn't understand, it can open a voice connection to the owner and relay the question. The owner can answer the question, taking over the representation for a short while before returning control to the proxy.

Extending Haptic Feedback for the Visitor

We're investigating finger-mounted portable devices that can display the transition between contact and noncontact of the fingers (see Figure 4d).
Such devices have three potential advantages: unlike desk-mounted devices, their range of operation has no intrinsic limits; they're not likely to interfere with the user; and constructing systems with multiple contacts for the fingers is easier.

By acknowledging the varied ways in which much collaborative work and socializing occur, Beaming's destination-visitor paradigm could provide an integrated approach to high-quality telecommunication. The paradigm presents interesting technical challenges: high-fidelity real-time reconstruction of the destination and locals in
multiple modalities, and presentation of the visitor in multiple modalities to the locals. Our experience from early example applications and integrations is that participants can understand the situation and exploit the 3D spatial nature of the communication in both directions. This has been true for both the acting application we discussed and a medical application that naturally exploits Beaming's asymmetry.13

Clearly, the Beaming project and concept raise serious ethical and legal issues. For example, someone could misuse transformation or the proxy to mislead locals, someone in one country could use a robotic embodiment to commit a crime in another country, or a body transformation experience could cause psychological problems. Widespread implementation of this concept would entail profound changes in law and societal relationships. The Beaming project takes these issues seriously, dedicating an entire strand of research to them.

Recent advances in camera technology, particularly depth cameras, have made some aspects of the system more easily deployable. We expect significant advances in the ability to visually reconstruct populated spaces in real time in the near future, and we'll exploit these advances to build multimodal representations of the destination. Our next-generation systems will also combine robots with novel types of display to afford more effective representations of the visitor.

Acknowledgments

The European Commission supports Beaming under the FP7 Information and Communication Technologies Work Programme.
Investigators in the project include Martin Buss (Technische Universität München), Antonio Frisoli (Scuola Superiore Sant'Anna), Maria Sanchez-Vives (August Pi i Sunyer Biomedical Research Institute), Miriam Reiner (the Technion), Giulio Ruffini (Starlab Barcelona), Matthias Harders and Gábor Székely (Swiss Federal Institute of Technology Zurich), Benjamin Cohen (IBM Haifa Research Lab), and Dorte Hammershøi (Aalborg Universitet). We also thank the associated research fellows and students.

References

1. D. Roberts et al., "Communicating Eye-Gaze across a Distance: Comparing an Eye-Gaze Enabled Immersive Collaborative Virtual Environment, Aligned Video Conferencing, and Being Together," Proc. IEEE Conf. Virtual Reality, IEEE CS, 2009.
2. A. Voida et al., "Asymmetry in Media Spaces," Proc. ACM Conf. Computer Supported Cooperative Work (CSCW 08), ACM, 2008.
3. M. Chen, "Leveraging the Asymmetric Sensitivity of Eye Contact for Videoconference," Proc. SIGCHI Conf. Human Factors in Computing Systems (CHI 02), ACM, 2002.
4. R. Wolff et al., "A Review of Tele-collaboration Technologies with Respect to Closely Coupled Collaboration," Int'l J. Computer Applications in Technology, vol. 29, no. 1, 2007.
5. S. Benford et al., "Shared Spaces: Transportation, Artificiality, and Spatiality," Proc. ACM Conf. Computer Supported Cooperative Work (CSCW 96), ACM, 1996.
6. R. Schroeder, "Being There Together and the Future of Connected Presence," Presence: Teleoperators and Virtual Environments, vol. 15, no. 4, 2006.
7. W. Steptoe et al., "Acting Rehearsal in Collaborative Multimodal Mixed Reality Environments," to be published in Presence: Teleoperators and Virtual Environments.
8. S. Tachi, Telexistence, World Scientific.
9. J. Kim et al., "Transatlantic Touch: A Study of Haptic Collaboration over Long Distance," Presence: Teleoperators and Virtual Environments, vol. 13, no. 3, 2004.
10. F. Pece, J. Kautz, and T. Weyrich, "Adapting Standard Video Codecs for Depth Streaming," Proc. Joint Virtual Reality Conf. EuroVR (JVRC), Eurographics Assoc., 2011.
11. M. Botvinick and J.
Cohen, "Rubber Hands 'Feel' Touch That Eyes See," Nature, vol. 391, no. 6669, 1998.
12. M. Slater et al., "First Person Experience of Body Transfer in Virtual Reality," PLoS ONE, vol. 5, no. 5, 2010, e10564.
13. D. Perez-Marcos et al., "A Fully Immersive Set-up for Remote Interaction and Neurorehabilitation Based on Virtual Body Ownership," Frontiers in Neurology, vol. 3, no. 110, 2012.

Anthony Steed is the head of the Virtual Environments and Computer Graphics group in the Department of Computer Science at University College London. Contact him at [email protected].

William Steptoe is a postdoctoral research fellow in the Virtual Environments and Computer Graphics group in the Department of Computer Science at University College London. Contact him at [email protected].
Wole Oyekoya is a postdoctoral research fellow in the Department of Computer Science at University College London. Contact him at [email protected].

Fabrizio Pece is a PhD student in the Virtual Environments and Computer Graphics group in the Department of Computer Science at University College London. Contact him at [email protected].

Tim Weyrich is a senior lecturer in the Virtual Environments and Computer Graphics group in the Department of Computer Science at University College London. Contact him at [email protected].

Jan Kautz is a professor of visual computing in the Department of Computer Science at University College London. Contact him at [email protected].

Doron Friedman heads the Advanced Virtuality Lab at Interdisciplinary Center Herzliya. Contact him at [email protected].

Angelika Peer is a senior researcher and lecturer at the Institute of Automatic Control Engineering and a Carl von Linde Junior Fellow at the Institute for Advanced Study of Technische Universität München. Contact her at [email protected].

Massimiliano Solazzi is an assistant professor in the Portable Haptic Interfaces group at Scuola Superiore Sant'Anna. Contact him at [email protected].

Franco Tecchia leads the Computer Graphics and Virtual Environments Group at Scuola Superiore Sant'Anna. Contact him at [email protected].

Massimo Bergamasco is a full professor of applied mechanics at Scuola Superiore Sant'Anna. Contact him at [email protected].

Mel Slater is a research professor at the Catalan Institution for Research and Advanced Studies at Universitat de Barcelona and a professor of virtual environments at University College London. He's the Beaming technical manager. Contact him at [email protected].

Contact department editors Bernd Froehlich at [email protected] and Mark Livingston at [email protected].
