
Virtual Window for Improved Depth Perception in Medical AR

Christoph Bichlmeier and Nassir Navab
Chair for Computer Aided Medical Procedures (CAMP), TU Munich, Germany
e-mail: bichlmei@in.tum.de, navab@cs.tum.edu

ABSTRACT

Real-time in-situ visualization has been a subject of intensive research and development during the last decade [2], [6], [8]. Besides accuracy and speed of the systems, one of the challenges in improving the acceptance of medical AR is to overcome the misleading depth perception caused by virtual entities of the AR scene being superimposed onto real imagery, e.g. virtual tissue and bones occluding real skin. Occlusion is the most effective depth cue [3] and makes, for instance, the visualized spinal column appear in front of the real skin. We present a technique to tackle this problem. A virtual window overlaid onto the real skin of the patient creates the impression of looking into the patient. This view is restricted by the frame of the window; however, as the observer moves, the frame covers and uncovers fragments of the visualized bones and tissue and thereby enables the depth cues motion parallax and occlusion, which correct the perceptive misinformation. An earlier experiment, in which seven different visualization modes for the spinal column were evaluated regarding depth perception, showed the perceptive advantage of the window. This paper describes the technical realization of the window.

Keywords: augmented reality, depth perception, proprioception, HMD

1 INTRODUCTION

Real-time in-situ visualization of medical data is receiving increasing attention. Examining a stack of radiographs is time and space consuming within the tight workflow of an OR, and physicians have to mentally associate the imagery of anatomical regions with the proper position on the patient. Medical augmented reality allows medical imagery such as radiographs to be examined right on the patient. Three-dimensional visualizations can be observed by moving around the AR scene with a head-mounted display (HMD). Several systems [9, 2, 5] that are custom made for medical procedures aim to meet the requirements for accuracy and to integrate their display devices seamlessly into the operational workflow.

Depth perception has also become a major issue in current research on medical AR. Virtual data is superimposed on real imagery, and visual depth perception is disturbed as shown in figure 1. This problem was identified as early as 14 years ago in the first publication on medical augmented reality [1]. That group tackled the problem by rendering "a synthetic hole ... around ultrasound images in an attempt to avoid conflicting visual cues."

Figure 1: The opaque surface model occludes the real thorax and is therefore perceived in front of the body, although the vertebrae are positioned correctly.

In an earlier paper, our group described an experiment that evaluated seven different visualization modes for the spinal column regarding depth perception. This paper describes the technical realization of one of the winners of that evaluation: a virtual window that can be overlaid onto the skin and provides a bordered view onto the spinal column inside the patient. Section 2 presents our AR system. Section 3 describes the setup and design of the virtual window. In section 4, we analyze the perceptive advantage of the window.

2 HARDWARE SETUP

This section describes our AR system, which consists of an optical outside-in tracking system for target tracking and an inside-out tracking system for head pose estimation.
2.1 AR System

First of all, we would like to introduce our AR system, which allows for in-situ visualization. Figure 3 gives a complete overview of the AR system in surgical use. For superior registration quality the system uses two synchronized tracking systems. The single-camera inside-out tracking system allows for a high rotational precision [4], which is necessary for tracking the stereoscopic video-see-through head-mounted display (HMD). The hardware setup is similar to the one proposed by Sauer et al. [8] for medical augmented reality. Two color cameras rigidly attached to the HMD simulate the eyes' views. An additional infrared camera mounted on the HMD tracks a marker frame (figure 2) for head pose estimation [11].

Figure 2: Plastic thorax model, HMD and a marker frame for inside-out tracking are some of the components of our experimental AR application.

There are two major reasons for choosing a video see-through system. First, real and virtual imagery can be optimally synchronized to avoid time lags between the images of the camera and the augmentation, which would lead to undesirable and, for the user, fatiguing effects like perceivable jitter or swimming [10]. Second, the system allows for more options of combining real and virtual imagery, such as occluding real objects, since we have full control over the real images, whereas optical see-through systems offer only a brightening augmentation.

The optical outside-in tracking system from A.R.T. GmbH (Weilheim, Germany), with four cameras fixed to the ceiling, covers a large working area of 3x3x2 m. The system is capable of tracking the targets in our setup with an accuracy of < 0.35 mm RMS. Both systems use the same kind of retroreflective fiducial markers, offering a registration-free transformation from one tracking system to the other. In order to recover the six degrees of freedom of a rigid body, the external optical tracking system requires at least four rigidly attached markers. Fiducial markers are attached to the patient lying on the operating table (see figure 2) and to further surgical instruments.

The marker frame target has an exceptional function, as it enables the transition between the inside-out and the outside-in tracking systems. Both tracking systems calculate the same coordinate system with respect to this reference target. All augmentations of targets that are tracked by the optical outside-in tracking system have to be positioned with respect to the marker frame of the inside-out tracking system. The following equation calculates the transformation ${}^{anytarget}H_{frame}$ from the marker frame to an exemplary target (read ${}^{to}H_{from}$):

$ {}^{anytarget}H_{frame} = {}^{anytarget}H_{ext} \cdot ({}^{frame}H_{ext})^{-1} $   (1)

${}^{anytarget}H_{ext}$ and ${}^{frame}H_{ext}$ are the transformations provided by the optical outside-in tracking system. The former describes the transformation from the origin of the tracking system to a target, the latter the transformation from the origin of the tracking system to the marker frame for inside-out tracking.

A PC-based workstation is used to render the 3D graphics, to compute and integrate the tracking data, and to synchronize and combine the imagery of virtual and real entities. Its specification is an Intel Xeon CPU at 3.20 GHz, 1.80 GB RAM, and an NVIDIA Quadro FX 3400/4400. The window is implemented in C++ using OpenGL.

2.2 In-Situ Visualization

Our system allows for different kinds of visualization techniques such as direct and indirect (isosurfacing) volume rendering (see figure 5). In-situ visualization requires the following preparations:

1. At least four fiducial markers have to be attached to the object of interest, e.g. the thorax. These markers have to be visible to the tracking cameras in the OR.

2. The object of interest, e.g. the part of the thorax, has to be scanned by CT or MRI to obtain a three-dimensional data volume.

3. Registration: the fiducial markers are segmented automatically from the data volume in order to align the virtual data with the real tracked object.

4. Choice of visualization technique.

Direct volume rendering is able to display every part of the data volume with a certain value for color and transparency. A predefined number of planes parallel to the image plane are clipped against the volume boundaries; all planes are rendered by interpolating within the volume and blending appropriately. Intensity values in the volume domain are mapped to the three-dimensional color space using transfer functions, which enables the accentuation of interesting structures.

Indirect volume rendering concerns the extraction of surface models from the data volume. Areas of interest, e.g. bones or blood vessels, can be determined by their intensity values in the volume domain. The marching cubes algorithm is parameterized with a certain threshold to segment a homogeneous area within the data volume and generates a surface model. Surface models can be designed with color, transparency and textures. The presentation of volume rendered objects is computationally more expensive than the display of surface models: our system renders the volume rendered spinal column with 5-6 fps and its surface model with 30 fps.

Positioning the visualization of the spinal column inside the thorax within our AR scenario is described by the transformation ${}^{visual}H_{frame}$:

$ {}^{visual}H_{frame} = {}^{visual}H_{thorax} \cdot {}^{thorax}H_{ext} \cdot ({}^{frame}H_{ext})^{-1} $   (2)

${}^{thorax}H_{ext}$ and ${}^{frame}H_{ext}$ are the transformations provided by the optical outside-in tracking system, and ${}^{visual}H_{thorax}$ represents the registration matrix that aligns the virtual data with the real tracked object.
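In code, equations (1) and (2) amount to a product of 4x4 homogeneous transforms with one rigid-transform inversion. The following C++ sketch illustrates the composition; the type Mat4 and the helpers mul and invRigid are illustrative assumptions, not part of the implementation described above.

// 4x4 homogeneous transform, stored column-major as in OpenGL.
struct Mat4 { double m[16]; };

// Matrix product r = a * b.
Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int c = 0; c < 4; ++c)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                r.m[c * 4 + row] += a.m[k * 4 + row] * b.m[c * 4 + k];
    return r;
}

// Inverse of a rigid transform [R|t]: [R^T | -R^T t].
Mat4 invRigid(const Mat4& h) {
    Mat4 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r.m[j * 4 + i] = h.m[i * 4 + j];             // R^T
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r.m[12 + i] -= r.m[j * 4 + i] * h.m[12 + j]; // -R^T t
    r.m[15] = 1.0;
    return r;
}

// Equation (1): pose of any tracked target relative to the marker frame.
Mat4 targetFromFrame(const Mat4& target_H_ext, const Mat4& frame_H_ext) {
    return mul(target_H_ext, invRigid(frame_H_ext));
}

// Equation (2): placing the visualization; visual_H_thorax is the
// registration matrix aligning the virtual data with the tracked thorax.
Mat4 visualFromFrame(const Mat4& visual_H_thorax, const Mat4& thorax_H_ext,
                     const Mat4& frame_H_ext) {
    return mul(visual_H_thorax, mul(thorax_H_ext, invRigid(frame_H_ext)));
}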

Figure 3: Our augmented reality tracking system consists of an outside-in tracking system and a video see-through inside-out tracking system.

3 VIRTUAL WINDOW

The following section introduces a virtual window that can be overlaid onto the skin of the patient. The user of the HMD observes the inside of the patient through this vision panel.

3.1 Positioning the Window

Placing the window to get the desired view into the patient can be performed without touching or moving the patient. While positioning the window, the observer wearing the HMD views a frame and guides it to the area of interest by moving his or her head, as shown in figure 4. If the frame is red, the window cannot be set because some of the grid points are not located on the skin. When the frame is green, the window can be set by key press. The size is adjustable by mouse interaction, which can be performed by an assistant on an external monitor that shows a copy of the imagery presented by the displays of the HMD.

Figure 4: The window can be set when the frame is green, i.e. when all grid points lie on the surface of the virtual skin.

The window adopts the shape of the skin. To this end, a marching cubes algorithm [7] segments the medical data volume and triangulates a surface model of the skin. The frame of the window defines the borders of a structured 2D grid consisting of a certain number of grid points. For every grid point, a so-called picking algorithm examines the depth buffer at the corresponding pixel and recalculates the three-dimensional position on the nearest virtual object. If some of the grid points lie on the far plane of the frustum and not on the surface of the skin, the color of the frame turns red and the window cannot be set. Discriminating the grid points on the far plane and triangulating only the remaining grid points on the skin would be another approach; however, we decided to allow setting up the window either completely or not at all, because a rectangular frame helps to perceive the position, shape and orientation of the window.

Picking in OpenGL can be realized with the following two functions:

void glReadPixels(GLint x, GLint y, GLsizei width, GLsizei height, GLenum format, GLenum type, GLvoid *pixels)

GLint gluUnProject(GLdouble winX, GLdouble winY, GLdouble winZ, const GLdouble modelMatrix[16], const GLdouble projMatrix[16], const GLint viewport[4], GLdouble *objX, GLdouble *objY, GLdouble *objZ)

After determination of their positions in 3D space, the grid points are connected to compose a transparent surface.
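As a minimal sketch of this per-grid-point picking, assuming a fixed-function OpenGL context (the helper name pickGridPoint is illustrative, not taken from the actual implementation):

#include <GL/glu.h>

// Reads the depth buffer at pixel (px, py), given in OpenGL window
// coordinates (origin at the lower left), and unprojects it to the 3D
// position on the nearest virtual surface. A depth of ~1.0 means the
// viewing ray hit the far plane, i.e. the grid point is not on the
// skin and the frame is colored red.
bool pickGridPoint(int px, int py, GLdouble out[3]) {
    GLint viewport[4];
    GLdouble model[16], proj[16];
    glGetIntegerv(GL_VIEWPORT, viewport);
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);

    GLfloat depth;
    glReadPixels(px, py, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
    if (depth >= 1.0f)
        return false; // far plane: grid point not on the skin

    return gluUnProject(px, py, depth, model, proj, viewport,
                        &out[0], &out[1], &out[2]) == GL_TRUE;
}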

When the position of the window is defined, it is used to mask the part of the scene that is inside the thorax. To this end we employ the so-called stencil buffer, an additional buffer besides the color buffer and the depth buffer found on modern computer graphics hardware. One of the major applications of the stencil buffer is to limit the area of rendering. In our application the area is limited to the window while the visualized tissue or bones are drawn. After that, the window surface and all other objects that are partially or completely outside the body, e.g. augmented surgical instruments, are rendered.
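The render passes can be sketched as follows; drawWindowSurface(), drawAnatomy() and drawOutsideObjects() stand in for the application's actual draw calls and are assumptions, not the paper's code.

#include <GL/gl.h>

// Assumed draw calls for the application's geometry (illustrative).
void drawWindowSurface();
void drawAnatomy();
void drawOutsideObjects();

void renderWithWindowMask() {
    glEnable(GL_STENCIL_TEST);

    // Pass 1: mark the window region in the stencil buffer (no color output).
    glClear(GL_STENCIL_BUFFER_BIT);
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    drawWindowSurface();
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

    // Pass 2: draw the visualized tissue and bones only where the stencil
    // buffer was marked, i.e. inside the window.
    glStencilFunc(GL_EQUAL, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    drawAnatomy();

    // Pass 3: draw the glass-like window surface and all objects partially
    // or completely outside the body, e.g. tracked instruments, without
    // the stencil restriction.
    glDisable(GL_STENCIL_TEST);
    drawWindowSurface();
    drawOutsideObjects();
}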
3.2 Window Design

The following design features help to intensify the depth cues provided by the window:

- The window is overlaid onto the skin, and its shape adapts to the shape of the skin. The accuracy of the overlay is determined by the level of detail of the surface model of the skin and by the number of grid points of the window.

- Certain material parameters make the window appear like glass. Highlight effects due to the virtual light conditions support depth perception: highlights on the window change the color of objects behind the window or even partially occlude them (see the sketch at the end of this section).

- The window plane is mapped with a simply structured texture, which enhances the depth cue motion parallax: due to the motion of the observer, the texture on the window seems to move relatively faster than the objects behind the window. Furthermore, the texture helps to perceive the shape of the window [13].

- The frame of the window is colored, and a second frame parallel to the outer one simulates the thickness of the window. The borders of the inner frame are visible depending on the current position of the observer.

- The background of the virtual objects seen through the window can be set to transparent or opaque.

Volume rendered objects as well as surface models can be positioned behind the window. Figures 5 and 7 illustrate these rendering modes for the exemplary application of dorsal surgery.
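As one plausible fixed-function parameterization of the glass-like appearance mentioned in the list above (the concrete values are illustrative assumptions, not the ones used in our system):

#include <GL/gl.h>

void drawWindowSurface(); // assumed draw call for the textured window plane

void drawGlassWindow() {
    // Semi-transparent, highly specular material so that virtual light
    // sources produce visible highlights on the window plane.
    GLfloat diffuse[]  = { 0.8f, 0.9f, 1.0f, 0.25f }; // low alpha: glass-like
    GLfloat specular[] = { 1.0f, 1.0f, 1.0f, 1.0f };
    glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, diffuse);
    glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, specular);
    glMaterialf(GL_FRONT_AND_BACK, GL_SHININESS, 96.0f);

    // Alpha blending lets highlights tint the objects seen through the window.
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    drawWindowSurface();
    glDisable(GL_BLEND);
}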

4 PERCEPTIVE GAIN

Depth perception is still a major problem in many AR systems when virtual entities can only be displayed superimposed on real imagery. Cutting and Vishton summarized the most important binocular and monocular depth cues [3]. Our AR scene is perceived binocularly through the two color cameras mounted on the HMD: stereopsis is realized by the slightly different perspectives of the two cameras, and convergence is predefined by their orientation. Regarding pictorial and monocular motion-induced depth cues, the most effective cue, occlusion, is responsible for the misleading depth perception when, in our case, the virtual spinal column occludes the real thorax. Adding an augmentation of the skin enables the observer to visually reorder the group of involved objects. The skin could be drawn transparent and positioned around the vertebrae; however, transparency provides only little information about the relative position of objects if one object is placed completely behind or in front of the other, as shown in figure 5. The window enhances perceptive information about depth because it partially occludes the vertebrae, and its frame covers and uncovers parts of the spinal column while the observer is moving. The latter depth cue, motion parallax, is after occlusion and stereopsis the third most effective source of information about depth [3].

Figure 5: If the observer does not know the color of the planes and one of the objects completely occludes the other one, he or she cannot be sure about the relative position of the planes.

5 RESULTS AND CONCLUSION

We presented a virtual window within a medical AR scenario that helps to overcome the misleading depth perception caused by the superimposition of the virtual spinal column onto the real thorax. An earlier experiment [12] compared seven different visualization modes of the spinal column regarding depth perception; the visualization of a surface model of the spinal column behind a virtual window overlaid onto the skin was rated as one of the two best methods. Placing the window interactively into the scene has the advantage that the surgeon or OR personnel do not have to handle a further instrument that would have to be kept sterile and would waste space: the observer wearing the HMD can easily position and reposition the window by moving his or her head. Figure 6 shows a sequence in which the observer moves the HMD with respect to the thorax with the attached window. Besides surface models (figure 8), volume rendered objects can also be presented behind the window, as shown in figure 7. The window provides the effective depth cues occlusion and motion parallax. However, if the surgeon moves his or her hand through the area of the window in front of the field of vision, this perceptive advantage is lost, as shown in figure 9.

Figure 6: The sequence shows the window from different perspectives. Each perspective provides another view on the inside of the thorax.

Figure 7: Volume rendered spinal column behind the window.

Figure 8: Surface model of the spinal column behind the window.

Figure 9: When the observer moves real objects, such as his or her hand or surgical instruments, within the field of vision, the perceptive advantage of the virtual window is lost again.

6 ACKNOWLEDGMENT

Special thanks to Tobias Sielhorst, Joerg Traub, Marco Feuerstein, Stefan Wiesner and Philipp Stefan for their support. This work was supported by the BFS within the NARVIS project (www.narvis.org).

REFERENCES

[1] Michael Bajura, Henry Fuchs, and Ryutarou Ohbuchi. Merging virtual objects with the real world: Seeing ultrasound imagery within the patient. In Proceedings of the 19th Annual Conference on Computer Graphics and Interactive Techniques, pages 203-210. ACM Press, 1992.

[2] Wolfgang Birkfellner, Michael Figl, Klaus Huber, Franz Watzinger, Felix Wanschitz, Johann Hummel, Rudolf Hanel, Wolfgang Greimel, Peter Homolka, Rolf Ewers, and Helmar Bergmann. A head-mounted operating binocular for augmented reality visualization in medicine: Design and initial evaluation. IEEE Trans. Med. Imag., 21(8):991-997, 2002.

[3] James E. Cutting and Peter M. Vishton. Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein and S. Rogers (Eds.), Perception of Space and Motion, pages 69-117, 1995.

[4] W. A. Hoff and T. L. Vincent. Analysis of head pose accuracy in augmented reality. IEEE Trans. Visualization and Computer Graphics, 6, 2000.

[5] A. P. King, P. J. Edwards, C. R. Maurer, Jr., D. A. de Cunha, D. J. Hawkes, D. L. G. Hill, R. P. Gaston, M. R. Fenlon, A. J. Strong, C. L. Chandler, A. Richards, and M. J. Gleeson. A system for microscope-assisted guided interventions. IEEE Trans. Med. Imag., 19(11):1082-1093, 2000.

[6] A. P. King, P. J. Edwards, C. R. Maurer, Jr., D. A. de Cunha, R. P. Gaston, M. Clarkson, D. L. G. Hill, D. J. Hawkes, M. R. Fenlon, A. J. Strong, T. C. S. Cox, and M. J. Gleeson. Stereo augmented reality in the surgical microscope. Presence: Teleoperators and Virtual Environments, 9(4):360-368, 2000.

[7] William E. Lorensen and Harvey E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. In SIGGRAPH '87: Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques, pages 163-169, New York, NY, USA, 1987. ACM Press.

[8] Frank Sauer, Ali Khamene, Benedicte Bascle, Sebastian Vogt, and Gregory J. Rubino. Augmented reality visualization in iMRI operating room: System description and pre-clinical testing. In Proceedings of SPIE, Medical Imaging, volume 4681, pages 446-454, 2002.

[9] Frank Sauer, Ali Khamene, and Sebastian Vogt. An augmented reality navigation system with a single-camera tracker: System design and needle biopsy phantom trial. In Proc. Int'l Conf. Medical Image Computing and Computer Assisted Intervention (MICCAI), 2002.

[10] Frank Sauer, Uwe J. Schoepf, Ali Khamene, Sebastian Vogt, Marco Das, and Stuart G. Silverman. Augmented reality system for CT-guided interventions: System description and initial phantom trials. In Medical Imaging: Visualization, Image-Guided Procedures, and Display, 2003.

[11] Frank Sauer, Fabian Wenzel, Sebastian Vogt, Yiyang Tao, Yakup Genc, and Ali Bani-Hashemi. Augmented workspace: Designing an AR testbed. In Proc. IEEE and ACM International Symposium on Augmented Reality, pages 47-53, 2000.

[12] Tobias Sielhorst, Christoph Bichlmeier, Sandro Heining, and Nassir Navab. Depth perception, a major issue in medical AR: Evaluation study by twenty surgeons. In MICCAI, 2006.

[13] James T. Todd, Lore Thaler, and Tjeerd M. H. Dijkstra. The effects of field of view on the perception of 3D slant from texture. Vision Research, 45(12):1501-1517, 2005.