A Survey of Usability Evaluation in Virtual Environments: Classification and Comparison of Methods


Doug A. Bowman
Department of Computer Science, Virginia Tech

Joseph L. Gabbard, Deborah Hix
[jgabbard, hix]@vt.edu
Systems Research Center, Virginia Tech

Presence, Vol. 11, No. 4, August 2002, by the Massachusetts Institute of Technology

Abstract

Virtual environments (VEs) are a relatively new type of human-computer interface in which users perceive and act in a three-dimensional world. The designers of such systems cannot rely solely on design guidelines for traditional two-dimensional interfaces, so usability evaluation is crucial for VEs. This paper presents an overview of VE usability evaluation to organize and critically analyze diverse work from this field. First, we discuss some of the issues that differentiate VE usability evaluation from evaluation of traditional user interfaces such as GUIs. We also present a review of some VE evaluation methods currently in use, and discuss a simple classification space for VE usability evaluation methods. This classification space provides a structured means for comparing evaluation methods according to three key characteristics: involvement of representative users, context of evaluation, and types of results produced. Finally, to illustrate these concepts, we compare two existing evaluation approaches: testbed evaluation (Bowman, Johnson, & Hodges, 1999) and sequential evaluation (Gabbard, Hix, & Swan, 1999).

1 Introduction and Motivation

During the past several years, virtual environments (VEs) have gained broad attention throughout the computing community. During roughly that same period, usability has become a major focus of interactive system development. Usability can be broadly defined as ease of use plus usefulness, including such quantifiable characteristics as learnability, speed and accuracy of user task performance, user error rate, and subjective user satisfaction (Hix & Hartson, 1993; Shneiderman, 1992). Despite intense and widespread research in both VEs and usability, until recently there were very few examples of research coupling VE technology with usability, a necessary coupling if VEs are to reach their full potential. Recently, there has been a notable (and gratifying) increase in researching and applying usability in virtual environments (Gabbard, Hix, & Swan, 1999; Tromp, Hand, Kaur, Istance, & Steed, 1998; Johnson, 1999; Volbracht & Paelke, 2000). By focusing on usability from the very beginning of the development process, developers are more likely to avoid creating interaction techniques (ITs) that do not match appropriate user task requirements and to avoid producing standards and principles for VE user interface development that are nonsensical.

This paper focuses on usability evaluation of VEs: determining how different ITs, interface styles, and numerous other factors such as information organization, visualization, and navigation affect the usability of VE applications and user interface components. Although numerous methods exist to evaluate the usability of interactive computer applications, these methods have well-known limitations, especially for evaluating VEs. For example, most usability evaluation methods are applicable only to a narrow range of interface types (such as graphical user interfaces, or GUIs) and have had little or no use with innovative, nonroutine interfaces such as those found in VEs. VE applications have interaction styles that are so radically different from ordinary user interfaces that well-proven methods that produce usable GUIs may be neither appropriate nor effective. There have been attempts to adapt traditional usability evaluation methods for use in VEs, and a few notable efforts to develop structured usability evaluation methods for VEs.

In this paper, we present a survey of some existing approaches to usability evaluation of VEs. We begin, in section 2, by making explicit some of the important differences between the evaluation of VE user interfaces and traditional GUIs. Next, in section 3, we categorize usability evaluation methods based on three important characteristics: involvement of representative users, context of evaluation, and types of results produced. Finally, in section 4, we present and compare two major approaches: testbed evaluation, which focuses on low-level ITs in a generic context, and sequential evaluation, which applies several different evaluation methods within the context of a particular VE application.

We would like to set the context for this paper by explaining some terminology. First, the term usability is meant in its broadest sense: it includes any characteristic relating to the ease of use and usefulness of an interactive software application, including user task performance, subjective satisfaction, user comfort, and so on. Usability evaluation is defined as the assessment of a specific application's user interface (often at the prototype stage), an interaction metaphor or technique, or an input device, for the purpose of determining its actual or probable usability. Usability engineering is, in general, a term covering the entire spectrum of user interaction development activities, including domain, user, and task analysis; conceptual and detailed user interaction design; prototyping; and numerous methods of usability evaluation. The roles involved in usability evaluation typically include a developer (who implements the application and/or user interface software), an evaluator (who plans and conducts evaluation sessions), and a user or subject (who participates in evaluation sessions). Finally, VEs include a broad range of systems, from interactive stereo graphics on a monitor to a fully immersive, six-sided CAVE. Most of the distinctive aspects of VE evaluation (section 2), however, stem from the use of partially or fully immersive systems.

2 Distinctive Characteristics of VE Evaluation

The approaches we discuss in this paper for the usability evaluation of virtual environments have been developed and used in response to perceived differences between the evaluation of VEs and the evaluation of traditional user interfaces such as GUIs. Many of the fundamental concepts and goals are similar, but the use of these approaches in the context of VEs is distinct. Here, we present some of the issues that differentiate VE usability evaluation, organized into several categories.
The categories contain overlapping considerations, but they provide a rough partitioning of these important issues. Note that many of these issues are not necessarily found in the literature, but instead come from personal experience and extensive discussions with colleagues.

2.1 Physical Environment Issues

One of the most obvious differences between VEs and traditional user interfaces is the physical environment in which the interface is used. In VEs, nontraditional input and output devices are used, which can preclude the use of some types of evaluation. Users may be standing rather than sitting, and they may be moving about a large space, using whole-body movements. These properties give rise to several issues for usability evaluation. Following are some examples.

In interfaces using non-see-through head-mounted displays (HMDs), the user cannot see the surrounding physical world. Therefore, the evaluator must ensure that the user will not bump into walls or other physical objects, trip over cables, or move outside the range of the tracking device (Viirre, 1994). A related problem in surround-screen VEs (such as the CAVE) is that the physical walls can be difficult to see because of projected graphics. Problems of this sort could contaminate the results of a usability evaluation (for example, if the user trips while in the midst of a timed task), and more importantly could cause injury to the user. To mitigate risk, the evaluator can ensure that cables are bundled and will not get in the way of the user (for example, cables may descend from above). Also, the user may be placed in a physical enclosure that limits movement to areas where there are no physical objects to interfere.

Many VE displays do not allow multiple simultaneous viewers (such as a user and evaluator), so equipment must be set up so that an evaluator can see the same image as the user. With an HMD, for example, this can be done by splitting the video signal and sending it to both the HMD and a monitor. In a surround-screen or workbench VE, a monoscopic view of the scene could be rendered to a monitor, or, if performance will not be adversely affected, both the user and the evaluator can be tracked. (This can cause other problems, however; see subsection 2.2 on evaluator considerations.) If images are viewed on a monitor, it is difficult to see both the actions of the user and the graphical environment at the same time, meaning that multiple evaluators may be necessary to observe and collect data during an evaluation session.

A common and very effective technique for generating important qualitative data during usability evaluation sessions is the think-aloud protocol as described by Hix and Hartson (1993). With this technique, subjects talk about their actions, goals, and thoughts regarding the interface while they are performing specific tasks. In some VEs, however, voice recognition is used as an IT, rendering the think-aloud protocol much more difficult and perhaps even impossible. Post-session interviews may help to recover some of the information that would have been obtained from the think-aloud protocol.

Another common technique involves recording video of both the user and the interface as described by Hix and Hartson (1993). Because VE users are often mobile, a single, fixed camera may require a very wide shot, which may not allow precise identification of actions. This could be addressed by using a tracking camera (with, unfortunately, additional expense and complexity) or a camera operator (additional personnel). Moreover, views of the user and the graphical environment must be synchronized so that cause and effect can clearly be seen on the videotape. Finally, the problems of recording video of a stereoscopic graphics image must be overcome.

An ever-increasing number of proposed VE applications are shared among two or more users (Normand & Tromp, 1996; Stiles et al., 1996).
These collaborative VEs become even more difficult to evaluate than single-user VEs because of, for example, physical separation of users (that is, different users in more than one physical location), the additional information that must be recorded for each user, the unpredictability of network behavior as a factor influencing usability, the possibility that each user will have different input and output devices, and the additional inherent complexity of a multiuser system, which may cause more frequent crashes or other technical problems.

2.2 Evaluator Issues

A second set of issues relates to the role of the evaluator in a VE usability evaluation. Because of the complexities and distinctive characteristics of VEs, a usability study may require multiple evaluators, different evaluator roles and behaviors, or both. Following are some examples.

Many VEs attempt to produce a sense of presence for the user; that is, a feeling of actually being in the virtual world rather than the physical one (Witmer & Singer, 1998; Slater, 1999; Usoh, Catena, Arman, & Slater, 2000).

Evaluators can cause breaks in presence if the user can sense them. In VEs using projected graphics, the user will see an evaluator if the evaluator moves into the user's field of view. This is especially likely in a CAVE environment (Cruz-Neira, Sandin, DeFanti, Kenyon, & Hart, 1992), where it is difficult for an evaluator to see the front of a user (for example, their facial expressions and detailed use of handheld devices) without affecting that user's sense of presence. This may break presence because the evaluator is not part of the virtual world. In any type of VE, touching or talking to the user can cause such breaks. If the evaluation is assessing presence, or if presence is hypothesized to affect performance on the task being evaluated, then the evaluator must take care to remain unsensed during the evaluation.

When maintaining presence is deemed very important for a particular VE, an evaluator may not wish to intervene at all during an evaluation session. This means that the experimental application/interface must be robust and bug-free, so that the session does not have to be interrupted to fix a problem. Also, instructions given to the user must be very detailed, explicit, and precise, and the evaluator should make sure the user has a complete understanding of the procedure and tasks before beginning the session.

VE hardware and software are often more complex and less robust than traditional user interface hardware and software (Kalawsky, 1993). Again, multiple evaluators may be needed for tasks such as helping the user with display and input hardware, running the software that produces graphics and other output, recording data such as timing and task errors, and recording critical incidents and other qualitative observations of a user's actions.

Traditional user interfaces typically require only a discrete, single stream of input (such as from mouse and keyboard), but many VEs include multi-modal input, combining discrete events, gestures, voice, and/or whole-body motion. It is much more difficult for an evaluator to process these multiple input streams simultaneously and record an accurate log of the user's actions. These challenges make multiple evaluators and video even more important.

2.3 User Issues

A large number of issues are related to the user population that is used as subjects in VE usability evaluations. In traditional evaluations, subjects are gleaned from the target user population of an application or from a similar representative group of people. Efforts are often made, for example, to preserve gender equity, to have a good distribution of ages, and to test both experts and novices if these differences are representative of the target user population. The nature of VE evaluation, however, does not always allow for such straightforward selection of users. Following are some examples.

VEs are still often a solution looking for a problem. Because of this, the target user population for a VE application or IT to be evaluated may not be known or well understood. For example, a study comparing two virtual travel techniques is not aimed at a particular set of users. Thus, it may be difficult to generalize performance results. The best course of action is to evaluate the most diverse user population possible in terms of age, gender, technical ability, physical characteristics, and so on, and to include these factors in any models of performance.
It may be impossible to differentiate between novice and expert users because very few potential subjects could be considered experts in VEs. Most users who could be considered experts might be, for example, research staff, whose participation in an evaluation could confound the results. Also, because most users are typically novices, the evaluation itself may need to be framed at a lower cognitive and physical level. Unlike with GUIs, evaluators can make no assumptions about a novice user's ability to understand or use a given VE device or IT.

Because VEs will be novel to many potential subjects, the results of an evaluation may exhibit high variability and differences among individuals.

This means that the number of subjects needed to obtain a good picture of performance may be larger than for traditional usability evaluations. If statistically significant results are required (depending on the type of usability evaluation being performed), the number of subjects needed may be even greater.

Researchers are still studying a large design space for VE ITs and devices. Because of this, evaluations often compare two or more techniques, devices, or combinations of the two. To perform such evaluations using a within-subjects design, users must be able to adapt to a wide variety of situations. If a between-subjects design is used, a larger number of subjects will again be needed.

VE evaluations must consider the effects of simulator sickness and fatigue on subjects. Although some of the causes of simulator sickness are known, there are still no predictive models for simulator sickness (Kennedy, Stanney, & Dunlap, 2000), and little is known regarding acceptable exposure time to VEs. For evaluations, then, a worst-case assumption must be made. A lengthy experiment (anything over 30 minutes, for example, might be considered lengthy, depending on the specific VE) must contain planned rest breaks and contingency plans in case of ill or fatigued subjects. Shortening the experiment is often not an option, especially if statistically significant results are needed. Because it is not known exactly what VE situations cause sickness or fatigue, most VE evaluations should include some measurement (subjective, questionnaire-based (Kennedy, Lane, Berbaum, & Lilienthal, 1993), or physiological) of these factors. A result indicating that an IT was 50% faster than any other evaluated technique would be severely misleading if that IT also made 30% of subjects sick. Thus, user comfort measurements should be included in low-level VE evaluations.

Presence is another example of a measure often required in VE evaluations that has no analog in the evaluation of traditional user interfaces. VE evaluations must often take into account subjective reports of perceived presence, perceived fidelity of the virtual world, and so on. Questionnaires (Witmer & Singer, 1998; Slater, 1999; Usoh et al., 2000) have been developed that purportedly obtain reliable and consistent measurements of such factors.

2.4 Issues Related to Type of Usability Evaluation

Traditional usability evaluation can take many forms, such as informal user studies, formal experiments, task-based usability studies, heuristic evaluations, and the use of predictive models of performance. (See section 3 for further discussion of these types of evaluations.) Several issues are related to the use of various types of usability evaluation in VEs. Following are some examples.

Evaluations based solely on heuristics (that is, design guidelines), performed by usability experts, are very difficult in VEs because of a lack of published, verified guidelines for VE user interface design. There are some notable exceptions (Bowman, 2002; Conkar, Noyes, & Kimble, 1999; Gabbard, 1997; Kaur, 1998; Kaur, Maiden, & Sutcliffe, 1999; Mills & Noyes, 1999; Stanney & Reeves, 2000), and heuristic evaluation is a critical step in assessing the usability of a VE interface prior to studying real users attempting representative tasks in the VE. It is not likely that a large number of heuristics will appear, at least until VE input and output devices become more standardized.
Even assuming standardized devices, however, the design space for VE ITs and interfaces is very large, making it difficult to produce effective and general heuristics to use as the basis for evaluation.

Another major type of usability evaluation that does not employ users is the application of performance models (for example, GOMS and Fitts' Law). Again, such models simply do not exist at this stage of VE development. However, the lower cost of both heuristic evaluation and performance model application makes them attractive for evaluation.

Because of the complexity and novelty of VEs, the applicability or utility of automated, tool-based evaluation may be greater than it is for

more traditional user interfaces. For example, automated usability evaluations could reduce the need for multiple evaluators in a single evaluation session. There are at least two possibilities for automated usability evaluation of VE user interfaces: first, to automatically collect and/or analyze data generated by one or more users in a VE, and, second, to perform an analysis of an interface design using an interactive tool that embodies design guidelines (similar to heuristics). Some work has been done on automatic collection and analysis of data using specific types of repeating patterns in users' data as indicators of potential usability problems (such as Siochi and Hix (1991)). However, this work was performed on a typical GUI, and there appears to be no research yet conducted that studies automated data collection and evaluation of users' data in VEs. Thus, differences in the use of these kinds of data for VE usability evaluation have not been explored, but they would involve, at a minimum, collating data from multiple users in a single session, possibly at different physical locations and even in different parts of the VE. At least one tool, MAUVE (Multi-Attribute Usability evaluation tool for Virtual Environments), incorporates design guidelines organized around several VE categories such as navigation, object manipulation, input, output (such as visual, auditory, haptic), and so on (Stanney, Mollaghasemi, & Reeves, 2000). Within each of these categories, MAUVE presents a series of questions to an evaluator, who uses the tool to perform a multi-criteria, heuristic-style evaluation of a specific VE user interface.
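As a purely illustrative sketch of the first possibility (automatic collection and analysis of users' data), the Python fragment below logs time-stamped interaction events from a VE session and flags actions that a user repeats in rapid succession as candidate usability problems for an evaluator to review, in the spirit of the repeating-pattern idea mentioned above. The event names, the time window, and the repetition threshold are assumptions made for this example and are not drawn from any published VE evaluation tool.

    from dataclasses import dataclass

    @dataclass
    class Event:
        time: float    # seconds since the start of the session
        user: str      # subject identifier
        action: str    # e.g., "select", "grab", "menu_open" (hypothetical event names)

    def repeated_actions(events, window=5.0, min_repeats=3):
        """Flag actions a user repeats at least min_repeats times within window seconds.

        Rapid repetition of the same action is treated only as a candidate
        usability problem for an evaluator to review, not as a diagnosis.
        """
        flags = []
        by_user = {}
        for e in sorted(events, key=lambda e: e.time):
            by_user.setdefault(e.user, []).append(e)
        for user, stream in by_user.items():
            for i, e in enumerate(stream):
                recent = [x for x in stream[:i + 1]
                          if x.action == e.action and e.time - x.time <= window]
                if len(recent) >= min_repeats:
                    flags.append((user, e.action, e.time))
        return flags

    # Example: one user reopening the same menu three times within a few seconds.
    log = [Event(1.0, "s01", "menu_open"), Event(2.2, "s01", "menu_open"),
           Event(3.1, "s01", "menu_open"), Event(9.0, "s01", "select")]
    print(repeated_actions(log))   # [('s01', 'menu_open', 3.1)]

A tool of this kind would still leave the interpretation of each flagged pattern to a human evaluator; it only narrows down where to look in a long session log.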
When performing statistical experiments to quantify and compare the usability of various VE ITs, input devices, interface elements, and so on, it is often difficult to know which factors have a potential impact on the results. Besides the primary independent variable (such as a specific IT), a large number of other potential factors could be included, such as environment, task, system, or user characteristics. One approach is to try to vary as many of these potentially important factors as possible during a single experiment. Such testbed evaluation (Bowman, Johnson, & Hodges, 1999; Snow & Williges, 1998) (see subsection 3.2) has been done with some success. The other extreme would be to simply hold constant as many of these other factors as possible and evaluate only in a particular set of circumstances. Thus, statistical VE experimental evaluations may be either overly simplistic or overly complex; finding the proper balance is difficult.

2.5 Other Issues

Finally, there are at least two other issues that do not fit easily into the categories above.

VE usability evaluations generally focus at a lower level than do traditional user interface evaluations. In the context of GUIs, a standard look and feel and a standard set of interface elements and ITs exist, so evaluation usually looks at subtle interface nuances or overall interface metaphors. In the VE field, however, there are no interface standards, and not even a good understanding of the usability of various interface types. Therefore, VE evaluations most often compare lower-level components, such as ITs or input devices.

It is tempting to over-generalize the results of evaluations of VE interaction performed in a generic (nonapplication) context. However, because of the fast-changing and complex nature of VEs, one cannot assume anything (display type, input devices, graphics processing power, tracker accuracy, and so on) about the characteristics of a real VE application. Everything has the potential to change. Therefore, it is important to include information about the environment in which the evaluation was performed and to evaluate in a range of environments (such as by using different devices) if possible.

3 Current Evaluation Methods

A review of recent VE literature indicates that a growing number of researchers and developers are considering usability at some level.

Some are employing extensive usability evaluation techniques with a carefully chosen, representative user base (for example, Hix et al. (1999)), whereas others undertake efforts that do not involve users, such as review and inspection by a usability expert (for example, Steed and Tromp (1998)). From the literature, we have compiled a list of usability evaluation methods that have been applied to VEs. (Although numerous references could be cited for some of the techniques we present, we have included citations that are most recognized and accessible.) Most of these methods were developed for 2D or GUI usability evaluation and have been subsequently extended to support VE evaluation. These methods include the following.

Cognitive Walkthrough (for example, Polson, Lewis, Rieman, and Wharton (1992)): an approach to evaluating a user interface based on stepping through common tasks that a user would perform and evaluating the interface's ability to support each step. This approach is intended especially to help understand the usability of a system for first-time or infrequent users, that is, for users in an exploratory learning mode.

Formative Evaluation (both formal and informal) (for example, Scriven (1967) and Hix and Hartson (1993)): an observational, empirical evaluation method that assesses user interaction by iteratively placing representative users in task-based scenarios in order to identify usability problems, as well as to assess the design's ability to support user exploration, learning, and task performance. Formative evaluations can range from being rather informal, providing mostly qualitative results such as critical incidents, user comments, and general reactions, to being very formal and extensive, producing both qualitative and quantitative (for example, task timing, errors, and so on) results.

Heuristic or Guidelines-Based Expert Evaluation (for example, Nielsen and Mack (1994)): a method in which several usability experts separately evaluate a user interface design (probably a prototype) by applying a set of heuristics or design guidelines that are relevant. No representative users are involved. Results from the several experts are then combined and ranked to prioritize iterative (re)design of each usability issue discovered.

Post-hoc Questionnaire (for example, Hix and Hartson (1993)): a written set of questions used to obtain demographic information and views and interests of users after they have participated in a (typically formative) usability evaluation session. Questionnaires are good for collecting subjective data and are often more convenient and more consistent than personal interviews.

Interview/Demo (for example, Hix and Hartson (1993)): a technique for gathering information about users by talking directly to them. An interview can gather more information than a questionnaire can and may go into a deeper level of detail. Interviews are good for getting subjective reactions, opinions, and insights into how people reason about issues. Structured interviews have a defined set of questions and responses. Open-ended interviews permit the respondent (interviewee) to provide additional information, ask broad questions without a fixed set of answers, and explore paths of questioning that may occur to the interviewer spontaneously during the interview. Demonstrations (typically of a prototype) may be used in conjunction with user interviews to aid a user in talking about the interface.
Summative or Comparative Evaluation (both formal and informal) (for example, Scriven (1967) and Hix and Hartson (1993)): a statistical comparison of two or more configurations of user interface designs, user interface components, and/or user ITs. As with formative evaluation, representative users perform task scenarios as evaluators collect both qualitative and quantitative data. As with formative evaluations, summative evaluations can be formally or informally applied.

Several innovative approaches to evaluating VEs have employed one or more of the evaluation methods described above. Some of these approaches are shown in table 1.

We chose this particular set of research literature to illustrate the wide range of methods and combinations of methods available for use; it is not intended to be exhaustive but rather representative.

Table 1. Examples of VE Usability Evaluation from the Literature (research example: usability evaluation method(s) employed)

Bowman and Hodges (1997): Informal summative
Bowman et al. (1999): Formal summative, interview
Darken and Sibert (1996): Summative evaluation, post-hoc questionnaire
Gabbard, Hix, and Swan (1999): User task analysis, heuristic evaluation, formative evaluation, summative evaluation
Hix et al. (1999): User task analysis, heuristic evaluation, formative evaluation, summative evaluation
Stanney & Reeves (2000): Heuristic evaluation
Steed and Tromp (1998): Heuristic evaluation, cognitive walkthrough
Slater, Usoh, and Steed (1995): Post-hoc questionnaire

A closer look at these and other research efforts shows that the type of evaluation method(s) used, as well as the manner in which it was extended or applied, varies from study to study. It is not clear whether an evaluation method or set of methods can be reliably and systematically prescribed given the wide range of design goals and user interfaces inherent in VEs. However, it is possible to classify those methods that have been applied to VE evaluation to reveal common and distinctive characteristics among methods.

3.1 Classification of VE Usability Evaluation Methods

We have created a novel classification space for VE usability evaluation methods. The classification space (figure 1) provides a structured means for comparing evaluation methods according to three key characteristics: involvement of representative users, context of evaluation, and types of results produced. The first characteristic discriminates between those methods that require the participation of representative users (to provide design or use-based experiences and feedback) and those methods that do not (note that methods not requiring users still require a usability expert). The second characteristic describes the type of context in which the evaluation takes place. In particular, this characteristic identifies those methods that are applied in a generic context and those that are applied in an application-specific context. The context of evaluation inherently imposes restrictions on the applicability and generality of results. Thus, conclusions or results of evaluations conducted in a generic context can typically be applied more broadly (that is, to more types of interfaces) than can results of an application-specific evaluation method, which may be best suited for applications that are similar in nature. The third characteristic identifies whether a given usability evaluation method produces (primarily) qualitative or quantitative results.

Note that these characteristics are not designed to be mutually exclusive and are instead designed to convey one (of many) usability evaluation method characteristics. For example, a particular usability evaluation method may produce both quantitative and qualitative results. Indeed, many of the identified methods are flexible enough to provide insight at many levels. We chose these three characteristics (over other potential characteristics) because they are often the most significant (to evaluators) due to their overall effect on the usability process. That is, a researcher interested in undertaking usability evaluation will likely need to know what the evaluation will cost, what the impact of the evaluation will be, and how the results can be applied.

Each of the three characteristics addresses these concerns: degree of user involvement directly affects the cost to plan, proctor, and analyze the evaluation; results of the process indicate what type of information will be produced (for the given cost); and the context of evaluation inherently dictates to what extent and how results may be applied.

Figure 1. A Classification of Usability Evaluation Methods for VEs.

This classification is useful on several levels. It structures the space of evaluation methods and provides a practical vocabulary for discussion of methods in the research community. It also allows one to compare two or more methods and understand how they are similar or different on a fundamental level. Finally, it reveals holes in the space (Card, Mackinlay, & Robertson, 1990), combinations of the three characteristics that have not yet been tried in the VE community. Figure 1 shows that there are two such holes in our space (the shaded boxes). Specifically, there appear to be no current VE usability evaluation methods that do not require users and that can be applied in a generic context to produce quantitative results (upper right of figure 1). Note that some possible existing 2D and GUI evaluation methods are listed in parentheses, but these have not yet (to our knowledge) been applied to VEs. Similarly, there appears to be no method that provides quantitative results in an application-specific setting that does not require users (third box down on the right of figure 1). These areas may be interesting avenues for further research.

A shortcoming of our classification is that it does not convey when in the software development life cycle a method is best applied, or how several methods may be applied either in parallel or serially. In most cases, answers to these questions cannot be determined without a comprehensive understanding of each of the methods presented, as well as the specific goals and circumstances of the VE research or development effort.

In the following subsections, we present two well-developed VE evaluation approaches and compare them in terms of practical usage and results.

3.2 Testbed Evaluation Approach

Bowman and Hodges (1999) take the approach of empirically evaluating ITs outside the context of applications (that is, within a generic context, rather than within a specific application), and add the support of a framework for design and evaluation, which we summarize here.

Principled, systematic design and evaluation frameworks give formalism and structure to research on interaction, rather than having the researcher rely solely on experience and intuition. Formal frameworks provide not only a greater understanding of the advantages and disadvantages of current techniques, but also better opportunities to create robust and well-performing new techniques, based on knowledge gained through evaluation. Therefore, this approach follows several important evaluation concepts, which are elucidated in the following subsections. Figure 2 presents an overview of this approach.

Figure 2. Bowman and Hodges' (1999) Evaluation Approach.

3.2.1 Initial Evaluation

The first step towards formalizing the design, evaluation, and application of ITs is to gain an intuitive understanding of the generic interaction tasks in which one is interested, and current techniques available for the tasks. (See figure 2, area labeled 1.) This is accomplished through experience using ITs and through observation and evaluation of groups of users. These initial evaluation experiences are heavily drawn upon for the processes of building a taxonomy, listing outside influences on performance, and listing performance measures. It is helpful, therefore, to gain as much experience of this type as possible so that good decisions can be made in the next phases of formalization.

3.2.2 Taxonomy

The next step is to establish a taxonomy (figure 2, 2) of ITs for the interaction task being evaluated. These taxonomies partition a task into separable subtasks, each of which represents a decision that must be made by the designer of a technique. In this sense, a taxonomy is the product of a careful task analysis. Once the task has been decomposed to a sufficiently fine-grained level, the taxonomy is completed by listing possible technique components for accomplishing each of the lowest-level subtasks. An IT comprises one technique component from each of the lowest-level subtasks. For example, the task of changing an object's color might be composed of three subtasks: selecting an object, choosing a color, and applying the color. The subtask for choosing a color might have two possible technique components: changing the values of R, G, and B sliders, or touching a point within a 3D color space. The subtasks and their related technique components make up a taxonomy for the object-coloring task.

Ideally, taxonomies established by this approach need to be correct, complete, and general. Any IT that can be conceived for the task should fit within the taxonomy. Thus, subtasks will necessarily be abstract. The taxonomy will also list several possible technique components for each of the subtasks, but it may not list every conceivable component.

Building taxonomies is a good way to understand the low-level makeup of ITs and to formalize differences between them, but, once they are in place, they can also be used in the design process. One can think of a taxonomy not only as a characterization, but also as a design space. Because a taxonomy breaks the task down into separable subtasks, a wide range of designs can be considered quickly, simply by trying different combinations of technique components for each of the subtasks. There is no guarantee that a given combination will make sense as a complete IT, but the systematic nature of the taxonomy makes it easy to generate designs and to reject inappropriate combinations.
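To make the design-space reading of a taxonomy concrete, the following sketch enumerates candidate ITs for the object-coloring example as the cross product of technique components, one per lowest-level subtask. Only the two color-choosing components come from the example above; the components for selecting the object and applying the color, and the reject hook that stands in for the designer's screening of nonsensical combinations, are assumptions for illustration.

    from itertools import product

    # Taxonomy for the object-coloring task: three subtasks, each with candidate
    # technique components. Only the two color-choosing components come from the
    # text; the others are assumed for illustration.
    taxonomy = {
        "select object": ["ray-casting", "virtual hand"],
        "choose color":  ["RGB sliders", "touch point in 3D color space"],
        "apply color":   ["press button", "drop color onto object"],
    }

    def candidate_designs(taxonomy, reject=lambda design: False):
        """Enumerate ITs: one technique component per lowest-level subtask."""
        subtasks = list(taxonomy)
        for combo in product(*(taxonomy[s] for s in subtasks)):
            design = dict(zip(subtasks, combo))
            if not reject(design):   # designer screens out nonsensical combinations
                yield design

    for design in candidate_designs(taxonomy):
        print(design)                # 2 x 2 x 2 = 8 candidate designs to consider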

3.2.3 Outside Factors

ITs cannot be evaluated in a vacuum. A user's performance on an interaction task may depend on a variety of factors (figure 2, 3), of which the IT is but one. For the evaluation framework to be complete, such factors must be included explicitly and used as secondary independent variables in evaluations. Bowman and Hodges (1999) identified four categories of outside factors. First, task characteristics are those attributes of the task that may affect user performance, such as distance to be traveled or size of the object being manipulated. Second, the approach considers environment characteristics, such as the number of obstacles and the level of activity or motion in the VE. User characteristics, including cognitive measures such as spatial ability or physical attributes such as arm length, may also contribute to user performance. Finally, system characteristics may be significant, such as the lighting model used or the mean frame rate.

3.2.4 Performance Metrics

This approach is designed to obtain information about human performance in common VE interaction tasks, but what is performance? Speed and accuracy are easy to measure, are quantitative, and are clearly important in the evaluation of ITs, but many other performance metrics (figure 2, 4) must also be considered. Thus, this approach also considers more subjective performance values, such as perceived ease of use, ease of learning, and user comfort. For VEs in particular, presence (Witmer & Singer, 1998) might be a valuable measure. The choice of IT could conceivably affect all of these, and they should not be discounted. Also, more than any other current computing paradigm, VEs involve the user's senses and body in the task. Thus, a focus on user-centric performance measures is essential. If an IT does not make good use of human skills or if it causes fatigue or discomfort, it will not provide overall usability despite its performance in other areas.

3.2.5 Testbed Experiments

Bowman and Hodges (1999) use testbed evaluation (figure 2, 5) as the final stage in the evaluation of ITs for VE interaction tasks. This approach allows generic, generalizable, and reusable evaluation through the creation of testbeds: environments and tasks that involve all important aspects of a task, that evaluate each component of a technique, that consider outside influences (factors other than the IT) on performance, and that have multiple performance measures. A testbed experiment uses a formal, factorial experimental design, and normally requires a large number of subjects. If many ITs or outside factors are included in the evaluation, the number of trials per subject can become overly large, so ITs are usually a between-subjects variable (each subject uses only a single IT), whereas other factors are within-subjects variables. Testbed evaluations have been performed for the tasks of travel and selection/manipulation (Bowman et al., 1999).

3.2.6 Results of Testbed Evaluation

Testbed evaluation produces a set of results or models (figure 2, 6) that characterize the usability of an IT for the specified task. Usability is given in terms of multiple performance metrics, with respect to various levels of outside factors. These results become part of a performance database for the interaction task, with more information being added to the database each time a new technique is run through the testbed.
These results can also be generalized into heuristics or guidelines (figure 2, 7) that can easily be evaluated and applied by VE developers. The last step is to apply the performance results to VE applications (figure 2, 8), with the goal of making them more useful and usable. To choose ITs for applications appropriately, one must understand the interaction requirements of the application. There is no single best technique, because the technique that is best for one application will not be optimal for another application with different requirements. Therefore, applications need to specify their interaction requirements before the most-appropriate ITs can be chosen. This specification is done in terms of the performance metrics that have already been defined as part of the formal framework. Once the requirements are in place, the performance results from testbed evaluation can be used to recommend ITs that meet those requirements.
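A minimal sketch of how this recommendation step might look, assuming testbed results are stored as per-metric usability scores and an application states its interaction requirements as weights over the same metrics. The technique names, scores, and weights below are invented solely to illustrate the matching process; they are not results from the actual testbed experiments.

    # Hypothetical performance database: per-metric usability scores on a common
    # 0-1 scale (higher is better). All numbers are invented for illustration.
    performance_db = {
        "gaze-directed steering": {"speed": 0.8, "accuracy": 0.6, "comfort": 0.7},
        "pointing":               {"speed": 0.7, "accuracy": 0.8, "comfort": 0.8},
        "map-based travel":       {"speed": 0.5, "accuracy": 0.9, "comfort": 0.9},
    }

    def recommend(db, requirements):
        """Rank ITs by how well they meet an application's weighted requirements."""
        def score(metrics):
            return sum(weight * metrics.get(metric, 0.0)
                       for metric, weight in requirements.items())
        return sorted(db, key=lambda it: score(db[it]), reverse=True)

    # An application that values accuracy and user comfort over raw speed.
    requirements = {"speed": 0.2, "accuracy": 0.5, "comfort": 0.3}
    print(recommend(performance_db, requirements))
    # ['map-based travel', 'pointing', 'gaze-directed steering']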

3.2.7 Case Studies

Although testbed evaluation could be applied to almost any type of interactive system, it is especially appropriate for VEs because of its focus on low-level interaction techniques. Testbed experiments have been performed comparing techniques for the tasks of travel (Bowman et al., 1999) and selection/manipulation (Bowman & Hodges, 1999).

The travel testbed experiment compared seven different travel techniques for the tasks of naïve search and primed search. In the primed search trials, the initial visibility of the target and the required accuracy of movement were also varied. The dependent variables were time for task completion and subjective user comfort ratings. Forty-four subjects participated in the experiment. Both demographic and spatial ability information for each subject were gathered.

The selection/manipulation testbed compared the usability and performance of nine different interaction techniques. For selection tasks, the independent variables were distance from the user to the object, size of the object, and density of distracter objects. For manipulation tasks, the required accuracy of placement, the required degrees of freedom, and the distance through which the object was moved were varied. The dependent variables in this experiment were the time for task completion, the number of selection errors, and subjective user comfort ratings. Forty-eight subjects participated, and we again obtained demographic data and spatial ability scores.

In both instances, the testbed approach produced unexpected and interesting results that would not have been revealed by a simpler experiment. For example, in the selection/manipulation testbed, it was found that selection techniques using an extended virtual hand performed well with larger, nearer objects and more poorly with smaller, farther objects, whereas selection techniques based on ray-casting performed well regardless of object size or distance.

The testbed environments and tasks have also proved to be reusable. The authors are aware of one researcher who is evaluating a new interaction technique for travel using the travel testbed, and another who is evaluating manipulation performance using two different VE display devices in the manipulation testbed, but results are not publishable as of this writing.

3.3 Sequential Evaluation Approach

Gabbard, Hix, and Swan (1999) present a sequential approach to usability evaluation for specific VE applications. The sequential evaluation approach is a usability engineering approach, and it addresses both design and evaluation of VE user interfaces. However, for the scope of this paper, we focus on different types of evaluation and address analysis, design, and prototyping only when they have a direct effect on evaluation. Although some of its components are well suited for the evaluation of generic ITs, the complete sequential evaluation approach employs application-specific guidelines, domain-specific representative users, and application-specific user tasks to produce a usable and useful interface for a particular application. In many cases, results or lessons learned may be applied to other, similar applications (for example, VE applications with similar display or input devices, or with similar types of tasks), and, in other cases (albeit less often), it is possible to abstract the results to generic cases.

Sequential evaluation evolved from iteratively adapting and enhancing existing 2D and GUI usability evaluation methods.
In particular, we modified and extended specific methods to account for complex ITs, nonstandard and dynamic user interface components, and multimodal tasks inherent in VEs. Moreover, the adapted/extended methods both streamlined the usability engineering process and provided sufficient coverage of the usability space. Although the name implies that the various methods are applied in sequence, there is considerable opportunity to iterate both within a particular method as well as among methods. It is important to note that all the pieces of this approach have been used for years in GUI usability evaluations. The unique contribution of the Gabbard et al. (1999) work is the breadth and depth offered by progressive use of these techniques, adapted when necessary for VE evaluation, in an application-specific context.

Further, the way in which each step in the progression informs the next step is an important finding, as discussed near the end of this section.

Figure 3. Gabbard, Hix, and Swan's (1999) Sequential Evaluation Approach.

Figure 3 presents the sequential evaluation approach. It allows developers to improve a VE's user interface by a combination of expert-based and user-based techniques. This approach is based on sequentially performing user task analysis (see figure 3, 1), heuristic (or guidelines-based expert) evaluation (figure 3, 2), formative evaluation (figure 3, 3), and summative evaluation (figure 3, 4), with iteration as appropriate within and among each type of evaluation. This approach leverages the results of each individual method by systematically defining and refining the VE user interface in a cost-effective progression. Depending upon the nature of the application, this sequential evaluation approach may be applied in a strictly serial approach (as figure 3's solid black arrows illustrate) or iteratively applied (either as a whole or per individual method, as figure 3's white arrows illustrate) many times. For example, when used to evaluate a complex command and control battlefield visualization application (Hix et al., 1999), user task analysis was followed by significant iterative use of heuristic and formative evaluation, and lastly followed by a single, broad summative evaluation.

From experience, this sequential evaluation approach provides cost-effective assessment and refinement of usability for a specific VE application. Obviously, the exact cost and benefit of a particular evaluation effort depends largely on the application's complexity and maturity. In some cases, cost can be managed by performing quick and lightweight formative evaluations (which involve users and thus are typically the most time-consuming to plan and perform). Moreover, by using a hallway methodology (Nielsen, 1999), user-based methods can be performed quickly and cost effectively by simply finding volunteers from within one's own organization. This approach should be used only as a last resort, or in cases in which the representative user class includes just about anyone. When used, care should be taken to ensure that hallway users provide a close representative match to the application's ultimate users.

Although each of the individual methods in the sequential evaluation approach is well known to those within the usability engineering community, they have not been used widely in the VE community. Therefore, we describe the methods in more detail, with particular attention to how they have been adapted for VEs.

3.3.1 User Task Analysis

A user task analysis (for example, Hackos and Redish (1998)) provides the basis for design in terms of what users need to be able to do with the VE application. This analysis generates (among other resources) a list of detailed task descriptions, sequences, and relationships, user work, and information flow (figure 3, A).

Typically, a user task analysis is provided by a VE design and development team, based on extensive input from representative users. Whenever possible, it is useful for an evaluator to participate in the user task analysis. The user task analysis also shapes representative user task scenarios (figure 3, D) by defining, ordering, and ranking user tasks and task flow. The accuracy and completeness of a user task analysis directly affects the quality of the subsequent formative and summative evaluations because these methods typically do not reveal usability problems associated with a specific interaction within the application unless it is included in the user task scenario (and is therefore performed by users during evaluation sessions). Similarly, to evaluate how well an application's interface supports high-level information gathering and processing, representative user task scenarios must include more than simply atomic, mechanical- or physical-level tasking, but should also include high-level cognitive, problem-solving tasking that is specific to the application domain. This is especially important in VEs, in which user tasks generally are inherently more complex, difficult, and unusual than in, for example, many GUIs. Task analysis is a critical activity in usability engineering, driving all subsequent activities in the usability engineering process. Unfortunately, based on our experiences, it is often overlooked.

3.3.2 Heuristic Evaluation

A heuristic evaluation or guidelines-based expert evaluation may be the first assessment of an interaction design based on the user task analysis and application of guidelines for VE user interface design. One of the goals of heuristic evaluation is simply to identify usability problems in the design. Another important goal is to identify usability problems early in the development life cycle so that they may be addressed, and the redesign iteratively refined and evaluated (Nielsen & Mack, 1994). In a heuristic evaluation, VE usability experts compare elements of the user interaction design to guidelines or heuristics (figure 3, B), looking for specific situations in which guidelines have been violated and are therefore potential usability problems. The evaluation is performed by one or (preferably) more usability experts and does not require users. A set of usability guidelines or heuristics that are either general enough to apply to any VE or are tailored for a specific VE is also required.

Heuristic evaluation is extremely useful as it has the potential to identify many major and minor usability problems. Nielsen (1993) found that approximately 80% (between 74% and 87%) of a GUI design's usability problems may be identified when three to five expert evaluators are used. Moreover, the probability of finding a given major usability problem may be as great as 71% when only three evaluators are used.
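These figures are consistent with the commonly used problem-discovery model (often attributed to Nielsen and Landauer), in which the expected proportion of problems found by n independent evaluators is 1 - (1 - lambda)^n, where lambda is the probability that a single evaluator detects a given problem. The small sketch below evaluates the model for an assumed lambda of 0.35; real detection rates depend on evaluator expertise and on the interface being inspected, so the value is only illustrative.

    def proportion_found(n_evaluators, detect_prob=0.35):
        """Expected share of usability problems found by n independent evaluators.

        detect_prob (lambda) is an assumed per-evaluator detection rate; reported
        GUI values vary widely from project to project.
        """
        return 1 - (1 - detect_prob) ** n_evaluators

    for n in (3, 4, 5):
        print(n, round(proportion_found(n), 2))   # 3 -> 0.73, 4 -> 0.82, 5 -> 0.88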
From experience, heuristic evaluation of VE user interfaces provides similar results; however, the current lack of well-formed guidelines and heuristics for VE user interface design and evaluation makes this approach more challenging for VEs. Nonetheless, it is still a very cost-effective method for early assessment of VEs and helps uncover usability problems that, if not discovered via a heuristic evaluation, will very likely be discovered in the much more costly formative evaluation process. In fact, one of the strengths of the sequential evaluation approach is that usability problems identified during heuristic evaluations can be detected and corrected prior to performing formative evaluations. This approach creates a streamlined user interface design (figure 3, C) that may be more rigorously studied in subsequent evaluations. Therefore, this approach leads to formative evaluation that is more cost effective and efficient than a formative evaluation that is not based on a documented user task analysis and heuristic evaluation. In most cases, this approach avoids the situation in which an iteration of formative evaluation is expended simply to expose obvious and glaring usability problems. A formative evaluation following a heuristic evaluation can focus not on major usability issues, but rather on those issues that are more subtle and more difficult to recognize. This is especially important because of the cost of VE development.

Once both major and minor usability problems are identified, further assessment is needed to understand how particular interface components may affect user performance. To focus subsequent evaluations on these identified usability issues, evaluators use results of both the heuristic evaluation and the task analysis as the basis for representative user task scenarios (figure 3, D).

For example, if heuristic evaluation identifies a possible mismatch between implementation of a voice recognition system and manipulation of user viewpoint, then scenarios requiring users to manipulate the viewpoint would be included in subsequent formative evaluations.

3.3.3 Formative Evaluation

Formative or user-centered evaluation (Scriven, 1967) is a type of evaluation that is applied during evolving or formative stages of design to ensure that the design meets its stated objectives and goals. Williges (1984) and Hix and Hartson (1993) extended formative evaluation to support evaluation of GUI user interfaces. The method relies heavily on usage context (such as user tasks, user classes, and user motivation), as well as a solid understanding of human-computer interaction (and, in the case of VEs, human-VE interaction). The purpose of formative evaluation is to iteratively assess and improve the usability of an evolving user interface design.

A typical formative evaluation cycle may begin with development of user task scenarios that are specifically designed to explore many facets of a user interface design. Task scenarios should provide ample coverage of tasks identified during a user task analysis. Representative users are recruited to work through the task scenarios as evaluators observe and collect data. Experienced usability evaluators follow a structured and scientific approach to data collection, resulting in large volumes of both qualitative and quantitative data. Both types of collected data are equally important parts of the formative evaluation process; quantitative data indicate that a user performance issue is present, and qualitative data indicate where (and sometimes why) it occurred. Collected data are analyzed to identify user interface components that both support and detract from user task performance and user satisfaction. Alternating between formative evaluation and (re)design efforts ultimately leads to an iteratively refined user interface design (figure 3, E). Refining the user interface design such that it efficiently and effectively supports all user tasks ensures that each comparison in a subsequent summative evaluation is fair (that is, each design in the summative study is as good as it can possibly be in terms of usability).

3.3.4 Summative Evaluation

Summative or comparative evaluation is an assessment and statistical comparison of two or more configurations of user interface designs, user interface components, and/or ITs. Summative evaluation is generally performed after user interface designs (or components) are complete, and it is a traditional factorial experimental design with multiple independent variables. Summative evaluation enables evaluators to measure and subsequently compare the productivity and cost benefits associated with different user interface designs. Comparing VE user interfaces requires a consistent set of user task scenarios (borrowed and/or refined from the formative evaluation effort), resulting in primarily quantitative data results that compare (on a task-by-task basis) a design's support for specific user task performance.

A major impact of the formative-to-summative progression is that results from formative evaluations inform design of summative studies by helping to determine appropriate usability characteristics to evaluate and compare in summative studies. Invariably, numerous alternatives can be considered as factors in a summative evaluation.
Formative evaluations typically point out the most important usability characteristics and issues (such as those that recur most frequently, those that have the largest effect on user performance and/or satisfaction, and so on). These issues then become strong candidates for inclusion in a summative evaluation. For example, if formative evaluation showed that users have a problem with format or placement of textual information in a heavily graphical display, a summative evaluation could explore alternative ways of presenting such textual information. As another example, if users (or developers) want a number of different display modes (such as stereoscopic and monoscopic, head-tracked and static, landscape view and overhead view of a map), these various configurations can also be the basis of rich comparative studies related to usability.

3.3.5 Case Studies

The sequential evaluation approach has been applied to several VEs, including the Naval Research Lab's Dragon application, a VE for battlefield visualization (Gabbard et al., 1999).

Case Studies. The sequential evaluation approach has been applied to several VEs, including the Naval Research Lab's Dragon application, a VE for battlefield visualization (Gabbard et al., 1999). Dragon is presented on a responsive workbench that provides a 3D display for observing and managing battlespace information shared among commanders and other battle planners. We performed several evaluations over a nine-month period, using one to three users and two to three evaluators per session. Each evaluation session revealed a set of usability problems and generated a corresponding set of recommendations. The developers would address the recommendations and produce an improved user interface for the next iteration of evaluation. We performed four major cycles of iteration during our evaluation of Dragon, with each cycle using the progression of usability methods described in this section.

During the expert guidelines-based evaluations, various user interaction design experts worked alone or collectively to assess the evolving user interaction design for Dragon. These expert evaluations uncovered several major design problems that are described in detail by Hix et al. (1999). Based on our user task analysis and early expert guidelines-based evaluations, we created a set of user task scenarios specifically for battlefield visualization. During each formative session, at least two and often three evaluators were present. Although both the expert guidelines-based evaluation sessions and the formative evaluation sessions were personnel intensive (with two or three evaluators involved), we found that the quality and amount of data collected by multiple evaluators greatly outweighed the cost of those evaluators. Finally, the summative evaluation statistically examined the effect of four factors: locomotion metaphor (ego- versus exocentric), gesture control (controls rate versus controls position), visual presentation device (workbench, desktop, CAVE), and stereopsis (present versus not present). The results of these efforts are being finalized and are forthcoming. Other case studies that describe our experiences with sequential usability evaluation are available in Hix and Gabbard (2002).

4 Comparison of Approaches

The two major evaluation methods we have presented for VEs, testbed evaluation and sequential evaluation, take quite different approaches to the same problem, namely, how to improve usability in VE applications. At a high level, these approaches can be characterized in the space defined in section 3. Sequential evaluation is performed in the context of a particular application and can have both quantitative and qualitative results. Testbed evaluation is done in a generic evaluation context and usually seeks quantitative results. Both approaches employ users in evaluation.

In this section, we take a more detailed look at the similarities of and differences between these two approaches. We organize this comparison by answering several key questions about each of the methods:

What are the goals of the approach?
When should the approach be used?
In what situations is the approach useful?
What are the costs of using the approach?
What are the benefits of using the approach?
How are the approach's evaluation results applied?

Many of these questions can be asked of other evaluation methods, and perhaps should be asked prior to designing a usability evaluation. Indeed, answers to these questions may help identify appropriate evaluation methods, given specific research, design, or development goals. Future work should attempt to find valid answers to these and other related questions regarding different usability evaluation methods. Another possibility is to understand the general properties, strengths, and weaknesses of each approach so that the two approaches can be linked in complementary ways.
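For reference, the high-level characterization given at the start of this section can be restated along the three axes of the section 3 classification space (involvement of representative users, context of evaluation, and types of results produced). The snippet below is only a compact, illustrative summary; the field names are illustrative and not taken from the classification space itself.

```python
# Illustrative restatement of where the two approaches fall in the
# section 3 classification space.
classification = {
    "testbed evaluation": {
        "representative_users": True,        # both approaches employ users
        "context": "generic",                # evaluation outside any one application
        "results": ["quantitative"],
    },
    "sequential evaluation": {
        "representative_users": True,
        "context": "application-specific",   # tied to a particular VE application
        "results": ["quantitative", "qualitative"],
    },
}
```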
4.1 What Are the Goals of the Approach?

As mentioned, both approaches ultimately aim to improve usability in VE applications. However, there are more specific goals that exhibit differences between the two approaches. Testbed evaluation has the specific goal of finding generic performance characteristics for VE ITs. This means that one wants to understand IT performance in a high-level, abstract way, not in the context of a particular VE application. This goal is important because, if achieved, it can lead to wide applicability of the results.

To perform generic evaluation, the testbed approach is limited to general techniques for common, universal tasks (such as navigation, selection, or manipulation). To say this another way, testbed evaluation is not designed to evaluate special-purpose techniques for specific tasks, such as applying a texture. Rather, it abstracts away from these specifics, using generic properties of the task, user, environment, and system.

Sequential evaluation's immediate goal is to iterate towards a better user interface for a particular application, in this case a specific VE application. It looks very closely at particular user tasks of an application to determine which scenarios and ITs should be incorporated. In general, this approach tends to be quite specific and produces a near-optimal interface design for a particular application under development.

4.2 When Should the Approach Be Used?

By its non-application-specific nature, the testbed approach actually falls completely outside the design cycle of a particular application. Ideally, testbed evaluation should be completed before an application is even a glimmer in the eye of a developer. Because it produces general performance/usability results for ITs, these results can be used as a starting point for the design of new VE applications. On the other hand, sequential evaluation should be used early and continually throughout the design cycle of a VE application. User task analysis is necessary before the first interface prototypes are built. Heuristic and formative evaluations of prototypes produce recommendations that can be applied to subsequent design iterations. Summative evaluations of different design possibilities can be done when the choice of design (for example, for ITs) is not clear.

The distinct time periods in which testbed evaluation and sequential evaluation are employed suggest that combining the two approaches is possible and even desirable. Testbed evaluation can first produce a set of general results and guidelines that can serve as an advanced and well-informed starting point for a VE application's user interface design. Sequential evaluation can then refine that initial design in a fashion that is more application-specific.

4.3 In What Situations Is the Approach Useful?

Testbed evaluation allows the researcher to understand detailed performance characteristics of common ITs, especially user performance. It provides a wide range of performance data that may be applicable to a variety of situations. In a development effort that requires a suite of applications with common ITs and interface elements, testbed evaluation could provide a quantitative basis for choosing them, because developers could choose ITs that performed well across the range of tasks, environments, and users in the applications; their choices would be supported by empirical evidence.

As we have said, the sequential evaluation approach should be used throughout the design cycle of a VE application, but it is especially useful in the early stages of interface design. Because sequential evaluation produces results even on very low-fidelity prototypes or design specifications, a VE application's user interface can be refined much earlier, resulting in greater cost savings. Also, the earlier this approach is used in development, the more time remains for producing design iterations, which ultimately results in a better product. This approach also makes the most sense when a user task analysis has been performed. This analysis will suggest task scenarios that make evaluation more meaningful and effective.

4.4 What Are the Costs of Using the Approach?
The testbed evaluation approach can be seen as very costly and is definitely not appropriate for every situation. In certain scenarios, however, its benefits (see subsection 4.5) can make the extra effort worthwhile.
Some of the most important costs associated with testbed evaluation include difficult experimental design (many independent and dependent variables, where some of the combinations of variables are not testable), experiments requiring large numbers of trials to ensure significant results, and large amounts of time spent running experiments because of the number of subjects and trials. Once an experiment has been conducted, the results may not be as detailed as some developers would like. Because testbed evaluation looks at generic VE situations, information on specific interface details such as labeling, the shape of icons, and so on will not usually be available.

In general, the sequential evaluation approach may be less costly than testbed evaluation because it can focus on a particular VE application rather than paying the cost of abstraction. However, some important costs are still associated with this method. Multiple evaluators may be needed. Development of representative user task scenarios is essential. Conducting the evaluations themselves may be costly in terms of time, depending on the complexity of task scenarios. Most importantly, because this is part of an iterative design effort, time spent by developers to incorporate suggested design changes after each round of evaluation must be considered.

4.5 What Are the Benefits of Using the Approach?

Because testbed evaluation is so costly, its benefits must be significant before it becomes a useful evaluation method. One such benefit is the generality of the results. Because testbed experiments are conducted in a generalized context, the results may be applied many times in many different types of applications. Of course, a cost is associated with each use of the results because the developer must decide which results are relevant to a specific VE. Secondly, testbeds for a particular task may be used multiple times. When a new IT is proposed, that technique can be run through the testbed and compared with techniques already evaluated. The same set of subjects is not necessary because testbed evaluation usually uses a between-subjects design. Finally, the generality of the experiments lends itself to the development of general guidelines and heuristics. It is more difficult to generalize from experience with a single application.

For a particular application, the sequential evaluation approach can be very beneficial. Although it does not produce reusable results or general principles in the same broad sense as testbed evaluation, it is likely to produce a more refined and usable VE than if the results of testbed evaluation were applied alone. Another of the major benefits of this method relates to its involvement of users in the development process. Because members of the representative user group take part in many of the evaluations, the VE is more likely to be tailored to their needs, and should result in higher user acceptance and productivity, reduced user errors, increased user satisfaction, and so on. There may be some reuse of results for other applications with similar tasks or requirements, or use of refined ITs produced by the process.

4.6 How Are the Approach's Evaluation Results Applied?

The results of testbed evaluation are applicable to any VE that uses the tasks studied with a testbed. Currently, testbed results are available for some of the most common tasks in VEs: travel and selection/manipulation (Bowman et al., 1999). The results can be applied in two ways. The first, informal, technique is to use the guidelines produced by testbed evaluation in choosing ITs for an application (as by Bowman et al. (1999)). A more formal technique uses the requirements of the application (specified in terms of the testbed's performance metrics) to choose the IT closest to those requirements.
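One way to read this more formal technique is as a nearest-neighbor match between the application's requirements and each IT's measured performance profile. The sketch below is only an illustration of that idea, assuming hypothetical metric names and scores normalized to a common 0-1 scale; it is not the actual procedure reported by Bowman et al. (1999).

```python
import math

# Hypothetical testbed results: each IT scored on normalized (0-1)
# performance metrics. Technique names and numbers are illustrative only.
testbed_results = {
    "gaze-directed steering": {"speed": 0.8, "accuracy": 0.6, "comfort": 0.7},
    "pointing":               {"speed": 0.7, "accuracy": 0.8, "comfort": 0.6},
    "HOMER manipulation":     {"speed": 0.6, "accuracy": 0.9, "comfort": 0.8},
}

# Application requirements expressed in the same metric space,
# e.g. an application that values accuracy above raw speed.
requirements = {"speed": 0.5, "accuracy": 0.9, "comfort": 0.7}

def distance(profile, reqs):
    """Euclidean distance between an IT's profile and the requirements."""
    return math.sqrt(sum((profile[m] - reqs[m]) ** 2 for m in reqs))

# Pick the IT whose measured profile lies closest to the requirements.
best_it = min(testbed_results, key=lambda it: distance(testbed_results[it], requirements))
print(best_it, round(distance(testbed_results[best_it], requirements), 3))
```

A weighted distance could be substituted to emphasize the metrics that matter most for a given application.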
Both of these approaches should produce a set of ITs for the application that makes it more usable than the same application designed using intuition alone. However, because the results are so general, the VE will almost certainly require further refinement.

Application of results of the sequential evaluation approach is much more straightforward. Heuristic and formative evaluations produce specific suggestions for changes to the application's user interface or ITs. The result of summative evaluation is an interface or set of ITs that performs the best or is the most usable in a comparative study. In any case, results of the evaluation are tied directly to changes in the interface of the VE application.

5 Conclusions

Clearly, performing usability evaluation on nontraditional interactive systems requires new approaches, techniques, and insights. Although VE evaluation at its highest level retains the same goals and conceptual foundation as its GUI predecessors, the practical matter of performing actual evaluations can be quite different. This paper has surveyed current usability evaluation approaches for VEs; its contributions include a list of distinctive characteristics of VE evaluation, a classification space for evaluation approaches, and a set of questions that can be used to compare approaches.

There is still much work to be done in the area of VE usability evaluation. One avenue of research is the combination of multiple approaches. Based on our analysis of the testbed evaluation and sequential evaluation approaches to VE evaluation, we have found that these approaches can influence and affect one another when used together as part of a broader approach. To this end, we have identified a number of ways in which the results of one approach can be used to strengthen and refine the other. For example, the results of testbed evaluation can be generalized to produce heuristics for use in the heuristic evaluation stage of the sequential evaluation approach.

In addition, certain VE interaction tasks have not been explored sufficiently. For example, the task of VE system control, in which the user wishes to issue a command or change the state of the system in some way, is not well understood. Generic evaluations of various system control techniques would be highly useful to the VE community. Analysis of other usability evaluation approaches in terms of the questions posed in section 4 would also be useful. Answers to these and similar questions, for a broader variety of evaluation approaches, can greatly increase the effectiveness and efficiency of performing such evaluations. Such results could help expand the breadth and depth of usability evaluations performed on VE user interfaces. Finally, VE interface design guidelines, based on evaluation results, are needed. It is a reality that many VE developers do not choose to perform full usability studies on their systems, making the availability of useful and practical guidelines for VE interface design invaluable.

Acknowledgments

Portions of this work were funded by the Office of Naval Research, Dr. Helen M. Gigley and Dr. Astrid Schmidt-Nielsen, Program Managers. Dr. Gigley and Dr. Schmidt-Nielsen have funded an on-going collaboration between Virginia Tech and the Naval Research Laboratory (NRL) in Washington, DC for several years. Dr. Ed Swan of the Naval Research Laboratory has been a close collaborator on much of this work, which was also supported by Dr. Larry Rosenblum of NRL. Paul Quinn and Gary Toth, also of the Office of Naval Research, have recently provided some funding for our efforts. Dr. Richard E. Nance, of Virginia Tech's Systems Research Center, has given much moral support to our research. Dr. Larry F. Hodges of Georgia Tech was instrumental in research on testbed evaluation. We would also like to thank Donald Johnson, Don Allison, and Drew Kessler for their help and support. We are grateful to all these contributors, without whom this large body of work would not have been possible.

References

Bowman, D. (2002). Principles for the design of performance-oriented interaction techniques. In K. Stanney (Ed.), Handbook of virtual environments: Design, implementation, and applications. Mahwah, NJ: Lawrence Erlbaum Associates.
Bowman, D., & Hodges, L. (1999). Formalizing the design, evaluation, and application of interaction techniques for immersive virtual environments. The Journal of Visual Languages and Computing, 10(1).

Bowman, D., & Hodges, L. (1997). An evaluation of techniques for grabbing and manipulating remote objects in immersive virtual environments. Proceedings of the ACM Symposium on Interactive 3D Graphics.

Bowman, D., Johnson, D., & Hodges, L. (1999). Testbed evaluation of VE interaction techniques. Proceedings of the ACM Symposium on Virtual Reality Software and Technology.

Card, S., Mackinlay, J., & Robertson, G. (1990). The design space of input devices. Proceedings of CHI: Human Factors in Computing Systems.

Cruz-Neira, C., Sandin, D., DeFanti, T., Kenyon, R., & Hart, J. (1992). The CAVE: Audio visual experience automatic virtual environment. Communications of the ACM, 35(6).

Conkar, T., Noyes, J. M., & Kimble, C. (1999). CLIMATE: A framework for developing holistic requirements analysis in virtual environments. Interacting with Computers, 11(4).

Darken, R. P., & Sibert, J. L. (1996). Wayfinding strategies and behaviors in large virtual worlds. Proceedings of CHI: Human Factors in Computing Systems.

Gabbard, J. L., Swan, E. J., Hix, D., Lanzagorta, M., Livingston, M., Brown, D., & Julier, S. (2002, in press). Usability engineering: Domain analysis activities for augmented reality systems. Proceedings of SPIE Photonics West 2002, Electronic Imaging Conference.

Gabbard, J. L., Hix, D., & Swan, E. J. (1999). User-centered design and evaluation of virtual environments. IEEE Computer Graphics and Applications, 19(6).

Gabbard, J. L. (1997). A taxonomy of usability characteristics for virtual environments. Unpublished master's thesis, Department of Computer Science, Virginia Tech.

Hackos, J. T., & Redish, J. C. (1998). User and task analysis for interface design. New York: Wiley.

Hix, D., & Gabbard, J. L. (2002). Usability engineering of virtual environments. In K. Stanney (Ed.), Handbook of virtual environments: Design, implementation and applications. Mahwah, NJ: Lawrence Erlbaum Associates.

Hix, D., & Hartson, H. R. (1993). Developing user interfaces: Ensuring usability through product & process. New York: John Wiley and Sons.

Hix, D., Swan, E. J., Gabbard, J. L., McGee, M., Durbin, J., & King, T. (1999). User-centered design and evaluation of a real-time battlefield visualization virtual environment. Proceedings of IEEE Virtual Reality.

Johnson, C. (1999). Evaluating the contribution of desktop VR for safety-critical applications. Proceedings of SAFECOMP.

Kalawsky, R. (1993). The science of virtual reality and virtual environments. Reading, MA: Addison-Wesley.

Kaur, K. (1998). Designing virtual environments for usability. Unpublished doctoral dissertation, Centre for HCI Design, City University, London.

Kaur, K., Maiden, N., & Sutcliffe, A. (1999). Interacting with virtual environments: An evaluation of a model of interaction. Interacting with Computers, 11(4).

Kennedy, R. S., Lane, N. E., Berbaum, K. S., & Lilienthal, M. G. (1993). Simulator sickness questionnaire (SSQ): A new method for quantifying simulator sickness. International Journal of Aviation Psychology, 3(3).

Kennedy, R. S., Stanney, K., & Dunlap, W. (2000). Duration and exposure to virtual environments: Sickness curves during and across sessions. Presence: Teleoperators and Virtual Environments, 9(5).

Mills, S., & Noyes, J. (1999). Virtual reality: An overview of user-related design issues. Interacting with Computers, 11(4).

Nielsen, J. (1993). Usability engineering. Boston: Academic Press.

Nielsen, J. (1999). Users first: Cheap usability tests. Available at: 0,4413, ,00.html

Nielsen, J., & Mack, R. L. (1994). Executive summary. In J. Nielsen & R. L. Mack (Eds.), Usability inspection methods (pp. 1-23). New York: John Wiley & Sons.

Normand, V., & Tromp, J. (1996). Collaborative virtual environments: The COVEN project. Proceedings of Framework for Immersive Virtual Environments.

Polson, P., Lewis, C., Rieman, J., & Wharton, C. (1992). Cognitive walkthroughs: A method for theory-based evaluation of user interfaces. International Journal of Man-Machine Studies, 36.
Poupyrev, I., Weghorst, S., Billinghurst, M., & Ichikawa, T. (1997). A framework and testbed for studying manipulation techniques for immersive VR. Proceedings of the ACM Symposium on Virtual Reality Software and Technology.

Shneiderman, B. (1992). Developing the user interface. Reading, MA: Addison-Wesley.

Scriven, M. (1967). The methodology of evaluation. In R. E. Stake (Ed.), Perspectives of curriculum evaluation. American Educational Research Association monograph. Chicago: Rand McNally.

Siochi, A. C., & Hix, D. (1991). A study of computer-supported user interface evaluation using maximal repeating pattern analysis. Proceedings of CHI: Human Factors in Computing Systems.

Slater, M. (1999). Measuring presence: A response to the Witmer and Singer Presence Questionnaire. Presence: Teleoperators and Virtual Environments, 8(5).

Slater, M., Usoh, M., & Steed, A. (1995). Taking steps: The influence of a walking metaphor on presence in virtual reality.
