Effects of Orientation Disparity Between Haptic and Graphic Displays of Objects in Virtual Environments



Human Computer Interaction INTERACT 99, Angela Sasse and Chris Johnson (Editors), Published by IOS Press, © IFIP TC.13, 1999

Yanqing Wang & Christine L. MacKenzie
School of Kinesiology, Simon Fraser University, Burnaby, BC V5A 1S6, Canada. {wangy,cmackenz}@move.kines.sfu.ca

Abstract: Spatial disparity between a physical object and its visual presentation is very common in virtual environments. An experiment was conducted to systematically investigate the effects of orientation disparity between haptic and graphic displays of objects on human performance in a virtual environment. It was found that human performance was optimum with no orientation disparity. Orientation disparity had significant effects on the object orientation component, but did not appear to affect the object transportation component of human movements. It appeared that, when the orientation disparity was present, orientation speed relied more on the haptic display of objects while orientation accuracy depended more on the graphic display of objects. Implications for human-computer interface design were also discussed.

Keywords: human performance, virtual reality, object transportation, object orientation, object docking, 3D graphics, user interface, haptic feedback, visual feedback, visual dominance.

1 Introduction

Objects in the real world are a unitary whole; that is, their haptic and graphic displays are spatially consistent. Haptic displays are perceived by the hand in the control space, and include the physical characteristics (e.g. shape and size) of an object. Graphic displays of an object are perceived by the eyes. For object manipulation in the real world, what you see is generally consistent with what you feel. Humans perform daily activities in an environment where haptic and graphic displays of objects are completely superimposed.
However, this is hardly the case in human-computer interaction (HCI). In a typical HCI setup, the control space of the hand is separate from the display space of the objects, so that what a user feels with her hand is not the same as what she sees with her eyes. For example, in a desktop HCI situation, the movement of a mouse on a horizontal plane is transformed into the movement of a cursor on a vertical screen. A cursor is a visual presentation of the mouse, but with a different shape, size, location and orientation from the mouse. The mouse and the cursor are not spatially superimposed. In other words, the haptic display of a mouse is different from its graphic display, a graphic cursor (Graham & MacKenzie, 1996). We use the term disparity in this study to refer to the spatial difference between haptic and graphic displays of objects. Disparity between haptic and graphic displays is an important feature that distinguishes most virtual environments from real-world environments. In human-computer interaction applications, the graphic object being manipulated by a physical controller rarely has the same characteristics (e.g. shape, size, location and orientation) as the controller (Wang & MacKenzie, 1999; Wang et al., 1998; Ware, 1998; Ware, 1990; Zhai & Milgram, 1997). Disparity between haptic and graphic displays can have significant effects on human performance in virtual environments. The purpose of this experiment is to investigate how the disparity between haptic and graphic displays affects human object manipulation in virtual environments, and to provide further insight into HCI design.

1.1 Previous Research

Effects of disparity between haptic and graphic displays have rarely been studied. An experiment was conducted by Wang & MacKenzie (1999) to investigate the effects of relative size among the
controller, cursor and target on docking tasks in virtual environments. The difference between the controller and cursor sizes indicated the effects of size disparity between haptic and graphic displays. They found that human performance was better when the controller and cursor had the same size, that is, when the haptic and graphic displays were superimposed. Graham & MacKenzie (1996) conducted experiments to examine human pointing performance under different relationships between the display space and control space. They found that users generally achieved better performance when the display space and control space were superimposed than under other conditions. Their results can be related to the effects of transportation disparity between haptic and graphic displays of objects, since only object transportation was required in their experiments. Ware (1998; 1990) noticed that object transportation and orientation in virtual environments were much slower than in the real world. He suggested that it could be the object orientation components that slowed down the object manipulation process in virtual environments. We are unaware of research in the literature that has systematically studied the effects of orientation disparity between haptic and graphic displays on object manipulation in virtual environments. This disparity may play an important role in human performance and is the focus of this study.

1.2 Research Hypotheses

An experiment was conducted to investigate the effects of orientation disparity between object haptic and graphic displays on object transportation and orientation in virtual environments. Two research hypotheses were proposed:

1. It was predicted that human performance would be optimum under the no orientation disparity condition, when the haptic and graphic displays were superimposed.
When no disparity was present, humans could take advantage of easily transferring their object manipulation skills from the real world into the virtual world. This hypothesis was suggested by converging evidence from previous research (Graham & MacKenzie, 1996; Wang & MacKenzie, 1999; Wang et al., 1998).

2. Another hypothesis was that the orientation disparity between haptic and graphic displays of an object would not only affect the orientation process but also the transportation process. Previous research showed that object transportation and orientation processes interacted with each other, suggesting an interdependent structure (Wang et al., 1998). Even though orientation disparity can be considered as an input to the object orientation process, it may still affect the output of the object transportation process. At the same time, it was expected that the disparity would affect the orientation process more than the transportation process.

2 Method

2.1 Subjects

Eight university students volunteered to participate in this experiment. Each subject was paid $20 for a two-hour session. All subjects were right-handed, and had normal or corrected-to-normal vision. Informed consent was provided before the experimental session.

2.2 Experimental Apparatus

This experiment was conducted in the Virtual Hand Laboratory at Simon Fraser University. The experimental setup is shown in Figures 1a & b. A Silicon Graphics Indigo RGB monitor was set upside down on the top of a cart. A mirror was placed parallel to the monitor screen and the table surface. A stereoscopic, head-coupled graphical display was presented on the screen and reflected by the mirror. The image was perceived by the subject as if it were below the mirror, on the table surface. The subject wore CrystalEYES goggles to obtain a stereoscopic view.
Three infrared markers (IREDs) were fixed to the side frame of the goggles and their positions were monitored with an OPTOTRAK motion analysis system (Northern Digital, Inc.) with 0.2mm accuracy to provide a head-coupled view in 3D space. The subject held a wooden cube (30mm) on the table surface. Three IREDs were placed on the top of the wooden cube, driving a six degree-of-freedom (DOF) wire-frame graphic cube with a one-frame lag. The target was a wire-frame graphic cube that appeared to the subject to be on the table surface. The graphic six DOF cube and the target cube had the same size as the wooden cube, 30mm. The stereoscopic, head-coupled graphic display was updated at 60Hz with a one-frame lag of OPTOTRAK coordinates. Data from the OPTOTRAK were sampled and recorded at 60Hz by a Silicon Graphics Indigo Extreme computer workstation. A thin physical L-frame (not shown in the figure) was used to locate the starting position of the wooden cube at the beginning of each trial. The experiment was conducted in a semi-dark room. The subject saw the target cube and the six DOF cube presented on the mirror, but was unable to see the physical controller cube and the hand.

Figure 1: (a) The Virtual Hand Laboratory setup. Shown in schematic is a wooden cube (solid line) and a graphic target cube (dashed line). (b) Top view of the experimental setup. Two graphic cubes (dashed line) were perceived during the experiment. The graphic cube further away from the subject was the target cube; the graphic cube closer to the subject was the graphic display of the wooden cube (solid line), with 30 degrees of orientation disparity between them. The subject could feel the wooden cube in hand, but could not see it.

The wooden cube is referred to as the physical object, the six degree-of-freedom graphic cube as the graphic object, and the target cube as the graphic target. The task was to match either the physical object or the graphic object to the graphic target as fast and accurately as possible.

2.3 Experimental Design

The centre of the graphic object was always superimposed with the centre of the physical object (Figure 1b). In one experimental condition, the graphic object was oriented 30 degrees clockwise from the physical object around their common centre (vertical axis), thereby generating an orientation disparity between the haptic display and the graphic display. In the other condition, the graphic object and physical object were totally aligned, with no orientation disparity. The graphic target was located either 100mm or 200mm away from the starting position of the physical object. In the no disparity condition, the graphic target was oriented 30 degrees clockwise; in the disparity condition, the target was oriented 60 degrees clockwise. This target angle arrangement guaranteed that the physical object always had to be rotated 30 degrees to match the target orientation, regardless of the disparity condition, so that results under different disparity conditions were comparable.
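The design above fixes two quantities: the orientation of the physical cube, recoverable from markers on its top face, and the orientation of its graphic display, offset by the disparity. A minimal sketch of that geometry (the marker coordinates and the sign convention, clockwise treated as positive, are illustrative assumptions, not values from the paper):

```python
import math

def cube_angle_deg(marker_a, marker_b):
    """Orientation of the cube in the table plane: the angle of the
    line joining two top-face markers, in degrees."""
    dx = marker_b[0] - marker_a[0]
    dy = marker_b[1] - marker_a[1]
    return math.degrees(math.atan2(dy, dx))

def graphic_angle_deg(physical_angle_deg, disparity_deg):
    """Graphic object orientation in the disparity condition: the
    physical cube's angle plus the disparity offset."""
    return physical_angle_deg + disparity_deg

# Target arrangement: a 30-degree target with no disparity, a 60-degree
# target with 30 degrees of disparity; either way the hand must rotate
# the physical cube by the same 30 degrees.
for target, disparity in ((30.0, 0.0), (60.0, 30.0)):
    assert target - disparity == 30.0
```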
Subjects were asked to perform two kinds of tasks: physical match and graphic match. A physical match was to match the physical object to the location and orientation of the graphic target according to the haptic information felt with the hand. A graphic match was to match the graphic object to the graphic target based on what the subject saw with the eyes. In all conditions, subjects saw the graphic object and graphic target; they did not see the physical object or the hand. In summary, this was a balanced experimental design with repeated measures on eight subjects: 2 task conditions x 2 disparity conditions x 2 distances. Dependent variables were derived from OPTOTRAK 3D position data collected from IREDs on the top of the physical object (wooden cube). Data from the IRED on the top centre of the physical object were used for object transportation measures, and two IREDs on the top of the physical object were used to calculate the angular value for object orientation measures. Dependent variables included four temporal measures and four spatial error measures. The temporal measures were: task completion time (CT), object transportation time (TT), object orientation time (OT), and the ratio (R) between orientation time (OT) and transportation time (TT). The spatial error measures included: constant distance
error (CED), variable distance error (VED), constant angle error (CEA), and variable angle error (VEA).

2.4 Experimental Procedure

In each experimental session, individual eye positions were calibrated relative to the IREDs on the goggles to provide a customized stereoscopic, head-coupled view. The table surface was calibrated and the relative orientation between the physical object and the graphic object was determined. A subject was comfortably seated at a table, with the forearm at approximately the same height as the table surface. The subject held the physical object with the right hand, with the thumb and index finger in pad opposition on the centres of opposing cube faces which were parallel to the frontal plane of the body. The subject was instructed to match either the physical object or the graphic object to the location and angle of the graphic target as fast and accurately as possible. To start a trial, the subject placed the physical cube at a start position on the table surface. A graphic target appeared at one of two distances and two angles. The subject made either a physical match or a graphic match as quickly and accurately as possible. When the subject was satisfied with the match, he/she held the controller still and said "OK" to end the trial. Trials were blocked by task and disparity conditions. The target locations and angles were randomly generated. At the beginning of each block of trials, subjects were given 20 trials for practice.

2.5 Data Analysis

Data were filtered with a 7Hz low-pass second-order bi-directional Butterworth digital filter to remove digital sampling artifacts, vibrations of the markers, and tremor from the hand movement. Original IRED 3D position data were interpolated and filtered once only, and then used for the subsequent data manipulation, including angular data generation.
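This filtering step can be reproduced with standard signal-processing tools. The sketch below (using NumPy/SciPy, an assumption; the paper does not name its software) builds a second-order 7Hz low-pass Butterworth filter and runs it forward and backward, the bi-directional pass that leaves zero phase lag:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 60.0      # OPTOTRAK sampling rate, Hz
CUTOFF = 7.0   # low-pass cutoff, Hz

def filter_position(samples):
    """Second-order low-pass Butterworth at 7 Hz, applied in both
    directions (filtfilt) so the smoothed signal has no phase lag."""
    b, a = butter(2, CUTOFF / (FS / 2.0))  # cutoff normalized to Nyquist
    return filtfilt(b, a, samples)

# Example: marker noise riding on a smooth 1-second reach is attenuated.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, int(FS))
clean = 100.0 * (3 * t**2 - 2 * t**3)        # smooth 0-100 mm displacement
noisy = clean + 0.5 * rng.standard_normal(t.size)
smooth = filter_position(noisy)
```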
A computer program determining the start and end of a pointing movement, based on criterion velocities, was applied separately to the transportation and orientation processes. The start and end of each process were then confirmed by visually inspecting a graph of the velocity profile. A trial was rejected if the program failed to find a start and end, or if there was disagreement between the experimenter's visual inspection and the computer's results. For the dependent measures, ANOVAs were performed on the balanced design of 2 task conditions x 2 disparity conditions x 2 distances, with repeated measures on all three factors.

3 Results

3.1 Temporal Measures

In all experimental conditions, the relative time courses of the transportation and orientation processes were similar: the transportation process contained the orientation process. These results confirm the findings of our previous experiments (Wang & MacKenzie, 1999; Wang et al., 1998).

The average task completion time (CT) across all conditions was 893ms. As shown in Figure 2, CT increased with target distance, F(1,7) = 659.78, p < 0.001: 807ms at 100mm and 980ms at 200mm. There was no other significant effect.

Figure 2: Task completion time.

Figure 3: Object transportation time.

The transportation time (TT) had an average value of 868ms, taking up 97.2% of the task completion time (CT). Statistics of TT were similar to those of CT. TT increased from 776ms at 100mm to 959ms at 200mm, F(1,7) = 617.13, p < 0.001, as shown in Figure 3. No other significant effects were found. Disparity between haptic and visual information in orientation did not appear to have effects on TT. It appeared that
once visual information was presented, the subject could easily transport either the physical or the graphic object to the target location.

Figure 4: Object orientation time. (a) By target distance. (b) By disparity (0 vs. 30 degrees) and task (physical vs. graphic match).

The orientation time (OT) was 612ms on average, 70.4% of the task completion time (CT). Distance also had a significant effect on OT, F(1,7) = 56.91, p < 0.001. OT was 572ms at 100mm and increased to 651ms at 200mm (Figure 4a). There was a significant interaction between orientation disparity and task (using haptic information vs. visual information), F(1,7) = 13.95, p < 0.005. As shown in Figure 4b, OT was longest (661ms) when the task was to match the graphic object to the target and there was disparity between the haptic and graphic displays. In contrast, for the other three conditions, OT values were very close, around 600ms. When there was no disparity, it took 601ms to match the physical object to the target and 598ms to match the graphic object to the target. These values were close to the 586ms obtained when there was disparity but the task was to match the physical object to the target. Thus, subjects took extra time to orient the graphic object when there was disparity between haptic and visual information. It also suggested that subjects could successfully disregard the discrepant visual information to achieve a fast orientation of the physical object. It is interesting to note that the orientation disparity between haptic and graphic displays affected only the orientation process, OT, not the transportation process, TT.

The ratio (R) of OT to TT, shown in Figures 5a & b, is indicative of the structure of object transportation and orientation. R represents the proportional time of simultaneous control of the orientation and transportation processes.
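The criterion-velocity scoring described in Section 2.5, and the ratio R derived from it, can be sketched as follows; the 50 units/s threshold and the speed profiles are illustrative assumptions, not values from the paper:

```python
import numpy as np

def movement_window(speed, threshold=50.0):
    """First and last sample indices at which the speed profile exceeds
    the criterion velocity; None if the criterion is never met."""
    above = np.flatnonzero(np.asarray(speed) > threshold)
    if above.size == 0:
        return None
    return int(above[0]), int(above[-1])

# Transportation and orientation are scored on their own speed profiles
# (e.g. mm/s and deg/s); the ratio R = OT/TT then falls out directly.
FS = 60.0
trans_speed = [0, 10, 80, 300, 500, 300, 90, 20, 5]
orient_speed = [0, 0, 20, 120, 200, 110, 30, 0, 0]
t0, t1 = movement_window(trans_speed)
o0, o1 = movement_window(orient_speed)
TT = (t1 - t0) / FS          # transportation time, s
OT = (o1 - o0) / FS          # orientation time, s
R = OT / TT                  # proportion of TT under simultaneous control
```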
R decreased significantly with increases in target distance, F(1,7) = 39.14, p < 0.001: 0.78 at 100mm and 0.70 at 200mm (Figure 5a). There was a significant interaction between disparity and task conditions, F(1,7) = 7.97, p = 0.026, as shown in Figure 5b. When the disparity was present, R was 0.80 for graphic to graphic match and 0.69 for physical to graphic match. In the case of no disparity, R was 0.73 for graphic to graphic match and 0.74 for physical to graphic match. The subjects took less time to integrate the orientation process into the transportation process when they were asked to use the haptic information than the visual information.

3.2 Spatial Error Measures

The average constant distance error (CED) was very small, 0.13mm, and was not significantly different from 0. The average constant angle error (CEA) was 0.96 degrees, not significantly different from the specified target angles. This result indicates that system errors of the Virtual Hand setup used for this experiment were minimal and did not cause significant performance bias. Task conditions had a main effect on the variable distance error (VED), F(1,7) = 6.34, p = 0.040. As shown in Figure 6, VED was 1.5mm for graphic to graphic match, increasing to 2.5mm for physical to graphic match. This result showed that subjects achieved better accuracy in object transportation by using visual information than by using haptic information. The disparity between haptic and graphic displays had no significant effect on VED. As with the TT measure, the orientation disparity between haptic and graphic displays did not affect the spatial error of the transportation process. This might be due to the fact that the centre of the physical object
was required to be superimposed with that of the graphic object, regardless of whether or not there was orientation disparity.

Figure 5: Ratio of orientation time to transportation time. (a) By target distance. (b) By disparity and task.

Figure 6: Variable distance errors.

Figure 7: Variable angle errors.

The variable angle error (VEA) was 3.7 degrees on average. There was a significant two-way interaction between disparity and task conditions, F(1,7) = 21.22, p = 0.003. As shown in Figure 7, the greatest VEA, 7.4 degrees, occurred for physical to graphic match when there was disparity. For the other three conditions, VEAs were similar: with no disparity, 2.5 degrees for physical to graphic match and 2.3 degrees for graphic to graphic match; with orientation disparity, VEA was 2.4 degrees for graphic to graphic match. The patterns of VEA data in Figure 7 are opposite to those of the OT data in Figure 5b. It should also be noted that distance had no significant effect on VEA.

4 Discussion

Results from this experiment supported our first hypothesis: human performance was better when there was no spatial orientation disparity between haptic and graphic displays of objects. With no disparity, there was no difference in object manipulation between subjects using haptic information and using visual information about objects. It is not clear, in the case of no disparity, which information was actually used by the subject to perform the task. Based on the theory of visual dominance (Posner et al., 1976), visual information may be the primary source guiding object manipulation.
We suggest that both haptic and graphic displays played a role in object manipulation (Wang & MacKenzie, 1999), and therefore it was the consistency between haptic and graphic displays that resulted in optimum human performance. Haptic and graphic displays are superimposed in the real world. Thus, natural object manipulation in the real world generally yields optimum performance, compared with that in the virtual environment. This
suggests advantages for an augmented environment which has graphic displays superimposed on physical objects.

Our second hypothesis, about human performance under conditions of orientation disparity between haptic and graphic displays, was not supported. The disparity affected only the orientation process, not the transportation process. If the disparity is considered as an input to the orientation process, it influences only the output of the orientation process. In other words, the transportation process appears to be independent of the orientation disparity between haptic and graphic displays. This result was not predicted by our hypothesis. However, it did support the expectation that the orientation disparity would affect transportation and orientation differently. The orientation disparity between haptic and graphic displays affected only the orientation process of object manipulation. Object transportation and orientation had a parallel, interdependent structure, consistent with previous experiments (Wang & MacKenzie, 1999; Wang et al., 1998). Specifically, the time course of transportation contained that of orientation. The change in target distance affected both TT and OT. The ratio between OT and TT suggested that haptic information might be very important for integrating the orientation process with the transportation process. This experiment showed that certain factors (e.g. orientation disparity) affected only one component of the object manipulation process, while other factors (e.g. target distance) affected both components. Identifying these factors may provide further insight into the underlying mechanisms of object manipulation in virtual environments. The above findings have important implications for HCI design.
If interaction tasks involve only object transportation, the graphic object can be designed with an arbitrary orientation in relation to the controller. In current 2D graphical user interfaces, for example, the graphic arrow cursor usually has a fixed orientation of 45 degrees, regardless of mouse orientation. This orientation design has no effect on human performance since the cursor is only required to make translation movements. However, if tasks require object rotation, such as multidimensional manipulation in virtual environments, then the orientation of a 3D cursor relative to the controller will be critical. Our evidence indicates that users' performance will be better if the orientation of the cursor is properly aligned with that of the controller.

The orientation time was shorter for physical to graphic matches than for graphic to graphic matches when the disparity was present. This indicates that the subject was able to make use of haptic information to increase object orientation speed. The evidence supports our suggestion that the subject may use both visual and haptic information to perform manipulation; that is, visual dominance does not mean that visual information totally overrides the haptic information presented to the subject. The extra orientation time for graphic to graphic matches when there was disparity could be because the visual feedback process was somehow disrupted by the disparity between haptic and graphic displays.

Spatial error measures, however, presented quite a different picture from the temporal measures in terms of the disparity effect. The variable angle error was much smaller for graphic to graphic match than for physical to graphic match. This indicates that the spatial accuracy of object orientation is generally determined by the visual information, regardless of the haptic information. When subjects were asked to use only the haptic information and discard the visual information, significant spatial uncertainty occurred.
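The spatial uncertainty discussed here is captured by the constant/variable error distinction standard in motor-control work: constant error is the mean signed deviation from the target, variable error the dispersion of those deviations. A short sketch with made-up trial data (the numbers are illustrative, not the study's):

```python
import numpy as np

def constant_and_variable_error(achieved, target):
    """Constant error: mean signed deviation from the target value.
    Variable error: standard deviation of those deviations."""
    dev = np.asarray(achieved, dtype=float) - target
    return float(dev.mean()), float(dev.std())

# Hypothetical final positions (mm) against a 100 mm target:
ced, ved = constant_and_variable_error([101.0, 99.0, 102.0, 98.0], 100.0)
# CED is ~0 (no systematic bias) while VED captures trial-to-trial spread.

# The same definitions give the angular measures (CEA, VEA) against a
# 30-degree target angle:
cea, vea = constant_and_variable_error([31.0, 28.5, 30.5, 30.0], 30.0)
```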
The increase in orientation errors for physical-to-graphic matches may be attributed to interference from the inconsistent visual information. Altogether, the results suggest that haptic and visual information may affect different aspects of the orientation process: haptic information may relate more to manipulation speed, while visual information relates more to manipulation accuracy. Accordingly, in virtual environment design, attention should be given to the realism of the haptic display when fast manipulation is required, as in gaming; in contrast, if accuracy is the main concern, the quality of the graphic display should be emphasized.

5 Conclusions

We conclude:

1. Humans can achieve optimum manipulation performance when haptic and graphic displays of objects are superimposed.

2. Disparity in orientation between haptic and graphic displays of objects appears to have no significant effect on object transportation processes.

3. Disparity in orientation between haptic and graphic displays of objects significantly affects the object orientation process, increasing the orientation time for graphic-to-graphic matches and the spatial errors for physical-to-graphic matches.

4. It appears that the speed of object manipulation relies more on haptic information, while the accuracy of object orientation depends more on visual information.

Acknowledgements

We would like to thank Valerie A. Summers for her help in software design for the Virtual Hand Laboratory.

Human Computer Interaction INTERACT 99, Angela Sasse and Chris Johnson (Editors). Published by IOS Press, © IFIP TC.13, 1999.

Authors: Yanqing Wang; Christine L. MacKenzie.

Keywords: 3D graphics, haptic feedback, human performance, object docking, object orientation, object transportation, user interface, virtual reality, visual dominance, visual feedback.