Navigation in Virtual Environments through Motor Imagery




Robert Leeb a, Reinhold Scherer a, Felix Lee b, Horst Bischof b, Gert Pfurtscheller a,c
a Institute of Human-Computer Interfaces, Graz University of Technology
b Institute for Computer Graphics and Vision, Graz University of Technology
c Ludwig Boltzmann-Institute for Medical Informatics and Neuroinformatics, Graz University of Technology
Inffeldgasse 16a, A-8010 Graz, Austria
e-mail: robert.leeb@tugraz.at

Abstract

In this paper we report on navigation in a virtual environment using the output signal of an EEG-based Brain-Computer Interface (BCI). Such a BCI transforms bioelectrical brain signals, modified by mental activity, into a control signal. To date, only 1-D or 2-D BCI feedback has been used. The graphical possibilities of virtual reality (VR) should help to improve BCI feedback presentations and create new paradigms, with the aim of obtaining better control of the BCI. In this study the subjects had to imagine left or right hand movements and thereby explore a virtual conference room. With left hand motor imagery the subject turned to the left in the room, and vice versa. Three trained subjects reached 77% to 100% classification accuracy in the course of the experimental sessions.

1 Introduction

An electroencephalogram (EEG) based Brain-Computer Interface (BCI) is a communication system that represents a direct connection between the human brain and a computer. Mental activities or thoughts result in changes of electrophysiological brain signals, and a BCI is able to detect these changes and translate them into operative control signals. Imagination of a movement activates similar neural networks in the brain as the real execution of the same movement [7]. Therefore, motor imagery is an important control strategy in BCI applications [10] and is used to operate, e.g., computer-controlled spelling devices (virtual keyboards) in patients with locked-in syndrome [6] or neuroprostheses in patients with spinal cord injury [8]. Patients suffering from amyotrophic lateral sclerosis (ALS) can re-establish a communication channel to their surrounding environment by controlling electric spelling devices [2]. The BCI developed in our group over the last decade is based on the detection and classification of motor-imagery-related changes in the ongoing EEG [10]. The EEG data are classified in real time and online feedback is given to the subject. During BCI experiments in the past

this feedback was either one-dimensional (1-D) or two-dimensional (2-D). Online feedback is given via a horizontal bar, whereby the length of the bar corresponds to the classification output; if a right hand movement is identified, the bar extends to the right. Another feedback paradigm is the basket game, in which the classifier output modifies the horizontal position of a falling ball, so that left hand imagery drops the ball into the left basket [5]. Many other kinds of feedback are possible, such as selecting letters for writing or steering a prosthesis or a robot [11], but all of these are simple visual presentations. In contrast, virtual reality (VR) is a powerful tool to generate three-dimensional (3-D) feedback of arbitrarily adjustable complexity. The visualized scene can range from a very simple, bare room to a highly complex world with textures and animated moving objects [13]. In the presented work an EEG-based BCI is combined with VR as feedback medium. VR can provide dynamic and controllable experimental environments and is an optimally suited technology to create and design different types of realistic 3-D feedback. The combination of VR and BCI [1] opens up new possibilities for diagnostic and therapeutic arrangements. In this study the control signal from the BCI is used to navigate in a virtual environment (VE). At the moment only two classes are controllable, therefore only two directions of motion are possible. In the presented experiment the subjects had to explore the VE by turning left or right. This rotation was driven by the imagination of a left or right hand movement, so left motor imagery turned the subject to the left and vice versa.

2 Methods

2.1 Graz Brain-Computer Interface

In the Graz-BCI the EEG is recorded over the sensorimotor cortex during the imagination of different types of movements (e.g. left or right hand movement), processed online, classified and transformed into a control signal [11]. In particular, the event-related desynchronization (ERD) is used to convert brain states into control signals [10]. Characteristic spatial patterns can be found in the ERD depending on the type of movement being prepared. This EEG phenomenon of the awake brain is characterized by a decrease in amplitude (blocking, desynchronization) and corresponds to a state of higher cortical activity. While preparing a movement or imagining such a movement, the ERD shows different spatial patterns depending on whether the left or right hand is involved. The ERD occurs contralaterally prior to a real movement or during an imagined one: the preparation of, or thinking about, a right hand movement results in a power decrease over the left central hemisphere, and vice versa. Imagination of different movements results in characteristic spatial ERD patterns over the corresponding sensorimotor cortex [9]. The BCI system used consists of an EEG amplifier (g.tec, Austria), a data acquisition card (National Instruments) and a standard PC running Windows XP (see Fig. 1). The raw EEG is recorded via two bipolar derivations, amplified, filtered and stored to hard disk. Recording and timing are handled via MATLAB 6.5 (MathWorks Inc., Natick, USA) in combination with Simulink 5.0 and Real-Time Workshop 5.0 [4].

Figure 1: Block diagram of the BCI-VR system. The VR system on the left of the diagram gets input information from the tracking system, from a joystick and from a keyboard. Two stereoscopic images are calculated and sent to the HMD. The BCI system (right side) has the EEG as input and calculates a control signal, which is used to steer in the virtual environment. At the top of the diagram the software layers of both systems are displayed: the VR application layer has access to both VR Juggler and Performer, while the real-time BCI application runs on Simulink and MATLAB.

The subjects are instructed to imagine a left or a right hand movement, depending on the trigger cue presented on a monitor. From scalp electrodes placed over the motor cortex, bipolar EEG channels are recorded and bandpass filtered between 0.5 and 30 Hz. Artifacts, such as muscle and eye movements, were not discarded for online EEG analysis. The EEG is composed of different types of oscillatory activities, whereby oscillations in the alpha (8-12 Hz) and beta (16-24 Hz) bands are particularly important for discriminating between different brain states. One possibility to extract features from the EEG is therefore to bandpass filter the EEG and then calculate the logarithmic band power in the alpha and beta bands. These features from all EEG channels are transformed with a linear discriminant analysis (LDA) between the two classes (e.g. left and right motor imagery) into a control signal.
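The processing chain just described (bandpass filtering, logarithmic band power, LDA projection) can be sketched as follows. This is a minimal illustration, not the Graz-BCI implementation: the filter order, the 1-second moving-average window and all function names are assumptions made for the example.

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 128  # sampling rate in Hz, as used in the paper


def log_band_power(eeg, band, fs=FS, win_sec=1.0):
    """Band-pass filter one EEG channel and return the logarithmic
    band power, smoothed over a sliding 1-second window."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = lfilter(b, a, eeg)
    power = filtered ** 2
    win = int(win_sec * fs)
    kernel = np.ones(win) / win
    smoothed = lfilter(kernel, [1.0], power)  # causal moving average
    return np.log(smoothed + 1e-10)


def lda_control_signal(features, w, b):
    """Project the feature vector with precomputed LDA weights w and
    bias b; the sign indicates the class (left < 0 < right)."""
    return features @ w + b
```

Applied to both channels in the alpha (8-12 Hz) and beta (16-24 Hz) bands, this yields the four band-power features the paper feeds into the LDA.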
2.2 Virtual reality system

Many of the current and future VR experiments to be carried out in relation to the BCI require seated subjects placed in an immersive 3-D environment. Therefore, a lower-cost hardware system was specified for this scenario. A head-mounted display (HMD) system

was chosen, with the inclusion of a separate tracking system to monitor head movements in real time. The rendering system must be capable of real-time rendering of typical environments, delivering two separately rendered images, one to each eye of the stereo display (see Fig. 1). Low-cost graphics cards targeted at the game industry are becoming capable of this for professional and research use and have therefore been incorporated. A Virtual Research V8 HMD was chosen, with 640 x 480 pixels and a refresh rate of 60 Hz for both eyes. Because the EEG has to be recorded during the VR experiments, standard electromagnetic tracking hardware was ruled out due to the electrical sensitivity of the monitoring systems. Tracking techniques that do not affect the monitoring include ultrasonic, vision-based and inertial systems. Due to the seated nature of the subjects and cost constraints, an InterSense InertiaCube2 3-DOF (degrees of freedom) system was chosen. It provides orientation information (rotation around the three primary axes) for the HMD using inertial integration methods, disregarding positional information. As VR software, VR Juggler 1.0.6 and SGI Performer 3.0 were selected, running under Windows 2000 on a normal PC [14]. VR Juggler offers basic, integrated support for existing rendering APIs, such as SGI OpenGL, SGI Performer, Open Inventor and Open Scene Graph.

Figure 2: Schematic model of the combined framework with the BCI system on the left and the VR system on the right side. The BCI-PC sends the extracted and classified EEG parameters to the VR-PC and thereby controls the visual feedback ("navigation by thoughts"). The visual representation together with the BCI and the VR forms a closed-loop system.
Therefore, migration of any developed or existing software components is possible. The practical link between the two systems is a modified keyboard (see Fig. 1). The BCI maps the classifier output via a digital I/O line of the DAQ card onto this modified keyboard; the resulting effect is the same as if a user had pressed the appropriate button on the VR keyboard. With the BCI output signal the subject's position in the VR system can be changed (see Fig. 2), so the subject sees a thought-modified picture of the VE.
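The classifier-to-keyboard mapping amounts to a sign decision on the continuous LDA output. A minimal sketch, assuming illustrative key names and a pure sign-based rule (the real system drives a digital I/O line of the DAQ card rather than returning strings):

```python
def classifier_to_key(lda_output: float) -> str:
    # Sign of the LDA distance selects the class; in the real setup the
    # corresponding digital I/O line closes a contact on the modified
    # keyboard, emulating a key press in the VR application.
    return "KEY_LEFT" if lda_output < 0 else "KEY_RIGHT"


# Feeding a stream of classifier outputs yields a stream of key events:
outputs = [-0.7, -0.2, 0.1, 0.9]
keys = [classifier_to_key(o) for o in outputs]
```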

Figure 3: Dual rendered picture of a virtual conference room with VR Juggler 1.0.6 and Performer 3.0 for stereo HMD usage.

3 Experiments and results

3.1 Subjects and experimental setup

Four healthy subjects (age 26.2 ± 3.3), all familiar with the Graz-BCI, were selected to participate in this study. Each volunteer sat in a relaxing chair and was instructed not to move. The subject was either sitting 150 cm in front of a computer screen (standard BCI [11]) or wearing an HMD. Two bipolar EEG channels (electrodes located 2.5 cm anterior and posterior to C3 and C4, respectively, according to the international 10-20 system) were recorded with a sampling frequency of 128 Hz, and the band power (BP) was calculated sample by sample over 1-s epochs in the alpha and beta bands. Two frequency bands were selected from each EEG channel, resulting in four BP features. Each subject was measured four times on different days (sessions); in each session one training run and three feedback runs were performed. The data of the training run were used to compute an LDA classifier, and the error rates were calculated by 10 times 10-fold cross-validation of the LDA training. The estimated classifier with the best classification error during the feedback time (see Fig. 4) was selected for further use in the feedback runs. In the feedback runs the output of this classifier was used to control either the length and orientation of a horizontal bar or a rotation within a virtual conference room (see Fig. 3). Training runs and bar feedback runs were performed using the computer screen; only for the feedback runs within the virtual conference room was the HMD used. Depending on the cue, the subject was instructed to imagine a left or right hand movement. During the BCI bar feedback experiments, the feedback is given continuously for 5 seconds (see Fig. 4).
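The 10 times 10-fold cross-validation used to estimate the classification error can be sketched as follows, here with a minimal two-class Fisher discriminant written from scratch rather than the paper's MATLAB tooling; the regularization term and all names are illustrative assumptions.

```python
import numpy as np


def fisher_lda(X, y):
    """Two-class Fisher discriminant: returns weights w and bias b so
    that sign(X @ w + b) separates class 0 (left) from class 1 (right)."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0.T) + np.cov(X1.T)  # within-class scatter
    w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), m1 - m0)
    b = -w @ (m0 + m1) / 2
    return w, b


def cv_error(X, y, n_folds=10, n_reps=10, seed=0):
    """Mean error over n_reps repetitions of n_folds-fold cross-validation
    (10 x 10-fold, as used to pick the classifier in the paper)."""
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(n_reps):
        idx = rng.permutation(len(y))
        folds = np.array_split(idx, n_folds)
        for k in range(n_folds):
            test = folds[k]
            train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
            w, b = fisher_lda(X[train], y[train])
            pred = (X[test] @ w + b > 0).astype(int)
            errs.append(np.mean(pred != y[test]))
    return float(np.mean(errs))
```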
The output of the classifier varies between -1 and 1, whereby negative numbers correspond to the left class and positive numbers to the right one. In this kind of feedback the classification output determines the length of the bar. Using the same principle in the conference room task would mean mapping the output of the classifier directly to the rotation angle, so that negative classifier outputs result in left rotations (between 0° and 90°) and positive ones in right rotations. A varying view of the conference room is presented to the subject depending on this rotation angle. Modification of

the rotation angle means that the field of view of the subject changes; the intended feeling is that of the subject turning around. The LDA classification output is not very smooth, and therefore changes of the classifier output lead to strong modifications of the rotation angle. To avoid jumps of the rotation angle, only the relative rotation information is used instead of an absolute one: only the class affiliation is utilized, and the value of the LDA output is disregarded. So if the categorization was left, the subject rotated with constant speed by a predefined angle to the left, and vice versa. The turning information is summed up over the time of the trial, so a rotation by up to 90° to the left or right is possible. Additionally, a small classification margin was introduced to make this feedback more agreeable to the subject: if the output of the classifier was smaller than the defined margin, no movement in the VE happened. This neutral zone allowed a smooth switch-over between the two turning directions, so no abrupt changes occurred. The best threshold value of the margin for each class is estimated via ROC (Receiver Operating Characteristic) curves of the bar feedback run.

Figure 4: Each trial of the paradigm starts with a fixation cross; between seconds 3 and 4.25 the cue information, an arrow pointing to the left or right, is superimposed onto this cross. Additionally, a cue beep is given at second 3. In the case of feedback sessions the classifier output is presented until second 8.

3.2 Timing of the paradigm

Each trial lasted 8 seconds (see Fig. 4), and the time between two trials was randomized in a range from 0.5 to 2 seconds to avoid adaptation. The EEG was recorded continuously and stored on hard disk. The experiment started with a fixation cross in the center of the monitor.
Between seconds 3 and 4.25 an arrow was superimposed onto the cross; this cue stimulus pointed to the left or right. Simultaneously, a cue tone was given: a low tone (700 Hz) represented the left class and a high tone (1500 Hz) the right one. Depending on the direction of the arrow, the subject was instructed to imagine a left or right hand movement. The additional class information via tone pitch was applied all the time in order to train and adapt the subjects to this kind of stimulus, because during the HMD experiments only the tone, and no visual cue stimulus, was presented. This solution was chosen because it is not easy to present the class information visually within the VE. Each session consisted of four experimental runs of 40 trials each (20 left and 20 right cues) and lasted about one hour. The sequence of right and left cues was randomized within each run.
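A run of this paradigm could be generated along the following lines. The cue counts, tone frequencies and pause range are taken from the text; `make_run` and its defaults are illustrative assumptions.

```python
import random


def make_run(n_trials=40, seed=None):
    """Randomized cue sequence for one run: 20 left and 20 right cues,
    shuffled, each followed by a randomized 0.5-2 s inter-trial pause.
    Tone pitch codes the class: 700 Hz = left, 1500 Hz = right."""
    rng = random.Random(seed)
    cues = ["left"] * (n_trials // 2) + ["right"] * (n_trials // 2)
    rng.shuffle(cues)
    tones = [700 if c == "left" else 1500 for c in cues]
    pauses = [rng.uniform(0.5, 2.0) for _ in cues]
    return list(zip(cues, tones, pauses))
```

Randomizing both the cue order and the pause length prevents the subject from anticipating the next trial, as described above.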

Figure 5: The left panel (a) shows the LDA output; the two thick lines are the mean classification outputs of the two classes and the thin lines are the corresponding standard deviation borders. The right panel (b) displays the feedback bar diagram of the same run, with the classification accuracy (in percent) written beside the bars. The dotted lines in both panels are the two classification margins.

3.3 Preliminary results

Three out of four subjects achieved promising results with both feedback systems. One subject was removed from the study because she was too distracted by the feedback itself, regardless of the kind of feedback. Representative results of subject o3 (session 2, run 3) are presented in Figs. 5 and 6; right hand imagery is plotted as a bright dashed line and left hand imagery as a dark solid line. The LDA classifier obtained from the training data with 10x10-fold cross-validation is used to categorize the measured data. Fig. 5a shows the output of the LDA, with the results for both classes displayed in the same figure. Beginning with second 4, both classes can be discriminated easily. During a normal feedback run the output of the classifier is presented to the subject as a horizontal bar. In Fig. 5b an average time course of feedback bars is displayed; beside the bars the classification accuracy of both classes is given, which reaches 98%. For the sessions with the virtual conference room as feedback, the values of the classification margins are also necessary.
The best threshold value of the margin for each class is estimated via ROC curves. The obtained values for the classification margins are -0.2 for the left threshold and 0.2 for the right one. The achieved rotation angle is plotted in Fig. 6a. The maximum attainable angle is 90° at second 8, and the mean of the right class reaches this point. The standard deviation borders of both classes show that the subject sometimes had slight problems performing a gyration in the correct direction. A drawback of the cumulative feedback is that it takes a long time to correct the errors of the first seconds. For example, the lower standard deviation border of the left class in Fig. 6a shows that the classifier wrongly identified a right hand movement until second 4.5. Between seconds 4.5 and 5 the output of the classifier was in the neutral zone; afterwards a rotation to the correct side was performed. After second 7.5 the classification result was again between the two classification margins.
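The cumulative, margin-gated rotation feedback can be sketched as follows. The margins (±0.2) and the 90° limit follow the text; the per-sample step size and the function name are illustrative assumptions.

```python
def rotation_angle(lda_outputs, margin=0.2, step_deg=1.0, max_deg=90.0):
    """Accumulate the relative rotation over one trial. Each classified
    sample turns the viewpoint by a fixed step; outputs inside the
    neutral zone (|output| < margin) leave the angle unchanged. Only the
    class affiliation is used, not the magnitude of the LDA distance."""
    angle = 0.0
    for out in lda_outputs:
        if out <= -margin:
            angle -= step_deg   # left class: turn left
        elif out >= margin:
            angle += step_deg   # right class: turn right
        angle = max(-max_deg, min(max_deg, angle))  # clamp to ±90°
    return angle
```

Because the step size is constant and only the sign matters, a noisy LDA output no longer produces jumps in the rendered view, which is exactly the motivation given above.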

Figure 6: The left panel (a) shows the cumulatively achieved rotation; mean angles are plotted as thick lines and the standard deviation borders as thin lines. The maximum achievable angle is 90° at second 8. The right panel (b) displays the grand average of all rotation angles measured over all subjects and all runs.

The grand average of all rotation angles calculated over all subjects and all sessions is displayed in Fig. 6b. The separability between the two classes is very high. The reason why the standard deviation borders at second 8 are higher than at second 5, for example, is the cumulative presentation of the results. All subjects achieved a rotation to the left or right by approx. 45° during one trial. The offline analysis showed that the LDA classification error obtained with the HMD was between 20% and 0%, and with the normal computer screen between 18% and 0%. Of major importance is that no disturbing effects from the HMD or the tracker were found in the EEG.

4 Discussion and conclusion

The experimental study is still in progress; therefore only preliminary results can be presented. This study showed that the combination of a BCI with VR as feedback medium is possible: no technical or systematic interferences or disturbances of this combination could be found. The subjects were able to navigate in this very simple VE using motor imagery. The grand average in Fig. 6b shows that the mean gyration was 45° during one trial, although the maximum achievable angle of 90° was reached by some subjects.
If the same directly mapped feedback as in the bar experiment had been used for the left/right rotation task, the orientation of the subject would have changed too much, and the subject would definitely have lost the feeling of performing a real movement. It is very important that the feedback presented to the subject is as realistic as possible. Feedback behaving in an unknown or unpredictable way, not as expected in the real world, may cause problems: such effects can disturb focused attention and interfere with immersion in the specific task.

Presenting the cue only acoustically in the HMD run is clearly different from the standard BCI as used for the first feedback run with the monitor. The acoustic cue was always the same during all runs, but in the training run and in the first feedback run the visual cue was presented as well. Subjects reported that they were sometimes uncertain whether a tone was a left cue (low frequency) or a right cue (high frequency), especially when a series of tones of the same class occurred consecutively. In such cases it took some time to identify the intended class. Visual information can be identified more reliably than acoustic information; therefore it would be desirable to add some visual cues inside the virtual environment as well. Another idea is to use a single beep versus a pair of beeps instead of low and high frequency tones, which would be an unambiguous acoustic cue.

The aim of this study was to show that it is possible to combine the BCI with VR as feedback medium. No investigations have been performed to analyse whether VR feedback stimulates the subject to achieve a better performance or decreases the training effort. Instead of using band power and LDA, other features could be extracted from the EEG using adaptive autoregressive models (AAR), common spatial patterns (CSP) or hidden Markov models (HMM) [10]. Different mapping methods such as principal component analysis (PCA) or independent component analysis (ICA) and different classifiers such as neural networks could also have been used [10, 3]. A central problem in EEG analysis, however, is that finding optimized electrode positions and frequency bands is the most important part. Nevertheless, different methods will be investigated in the future.

In the presented experiment a 2-class BCI was used, so only two directions (left/right) were possible for navigation. Increasing the number of classes in the BCI would increase the motion degrees of freedom in the VE, which would allow the subject to really navigate (left/right, forward/backward, up/down) through virtual environments. However, increasing the number of classes leads to difficulties in finding suitable independent motor imagery states that can still be discriminated. The described experiments were cue-based, i.e. the subject was instructed to imagine a movement after each cue stimulus. The point in time at which a reasonable detection of the imagined movement becomes possible lies about 2 seconds after cue presentation (see Figs. 4 and 5); it takes time to identify the cortical motor imagery task. This means caution is necessary when the motor imagery task is used in time-critical experiments.

One characteristic component of virtual reality is immersion, the ability to focus attention on the virtual environment [12]. Motor imagery, too, is a conscious process accompanied by focused attention on the intended motor task. This raises the question of whether motor imagery is a suitable control strategy for navigating in virtual environments. Further investigations are necessary to address this question and perhaps to search for other mental control strategies.

Acknowledgement

This work was funded by the European PRESENCIA project (IST-2001-37927) and partly by the Austrian Science Fund, project P16326-BO2. Special thanks to Lee Bull from UCL-CS in London for his help during the setup of the VR Juggler system.

References

[1] Bayliss J.D. and Ballard D.H., A Virtual Reality Testbed for Brain-Computer Interface Research, IEEE Trans Rehabil Eng, Vol. 8, No. 2, pp. 188-190, June 2000.
[2] Birbaumer N., Kübler A., Ghanayim N., Hinterberger T., Perelmouter J., Kaiser J., et al., The Thought Translation Device (TTD) for Completely Paralyzed Patients, IEEE Trans Rehabil Eng, Vol. 8, No. 2, pp. 190-193, June 2000.
[3] Graimann B., Movement-related patterns in ECoG and EEG: Visualization and Detection, PhD thesis, Graz University of Technology, Graz, Austria, September 2002.
[4] Guger C., Schlögl A., Neuper C., Walterspacher D., Strein T. and Pfurtscheller G., Rapid Prototyping of an EEG-Based Brain-Computer Interface (BCI), IEEE Trans Neural Syst Rehabil Eng, Vol. 9, No. 1, pp. 49-58, March 2001.
[5] Krausz G., Scherer R., Korisek G. and Pfurtscheller G., Critical decision-speed and information transfer in the Graz Brain-Computer Interface, Appl Psychophysiol Biofeedback, Vol. 28, No. 3, pp. 233-240, September 2003.
[6] Neuper C., Müller G.R., Kübler A., Birbaumer N. and Pfurtscheller G., Clinical application of an EEG-based brain-computer interface: a case study in a patient with severe motor impairment, Clinical Neurophysiology, Vol. 114, No. 3, pp. 399-409, March 2003.
[7] Neuper C. and Pfurtscheller G., Event-related dynamics of cortical rhythms: frequency-specific features and functional correlates, Int J Psychophysiol, Vol. 43, No. 1, pp. 41-58, December 2001.
[8] Pfurtscheller G., Müller G.R., Pfurtscheller J., Gerner H.J. and Rupp R., Thought control of functional electrical stimulation to restore hand grasp in a patient with tetraplegia, Neuroscience Letters, Vol. 351, No. 1, pp. 33-36, November 2003.
[9] Pfurtscheller G. and Neuper C., Motor imagery activates primary sensorimotor area in humans, Neuroscience Letters, Vol. 239, No. 2-3, pp. 65-68, December 1997.
[10] Pfurtscheller G. and Neuper C., Motor Imagery and Direct Brain-Computer Communication, Proc of the IEEE, Vol. 89, No. 7, pp. 1123-1134, July 2001.
[11] Pfurtscheller G., Neuper C., Müller G.R., Obermaier B., Krausz G., Schlögl A., et al., Graz-BCI: State of the Art and Clinical Applications, IEEE Trans Neural Syst Rehabil Eng, Vol. 11, No. 2, pp. 177-180, June 2003.
[12] Slater M. and Steed A., A Virtual Presence Counter, Presence: Teleoperators and Virtual Environments, Vol. 9, No. 5, pp. 413-434, 2000.
[13] Slater M., Steed A. and Chrysanthou Y., Computer Graphics and Virtual Environments: From Realism to Real-Time, Addison Wesley Publishers, Harlow, England, 2001.
[14] VR Juggler - Open Source Virtual Reality Tool, http://www.vrjuggler.org/.