Large-Scale Collection of Usage Data to Inform Design


David M. Hilbert¹ & David F. Redmiles²
¹ FX Palo Alto Laboratory, 3400 Hillview Ave., Bldg. 4, Palo Alto, CA USA
² Information and Computer Science, University of California, Irvine, CA 92717 USA
hilbert@pal.xerox.com & redmiles@ics.uci.edu

Abstract: The two most commonly used techniques for evaluating the fit between application design and use, namely usability testing and beta testing with user feedback, suffer from a number of limitations that restrict evaluation scale (in the case of usability tests) and data quality (in the case of beta tests). They also fail to provide developers with an adequate basis for: (1) assessing the impact of suspected problems on users at large, and (2) deciding where to focus development and evaluation resources to maximize the benefit for users at large. This paper describes an agent-based approach for collecting usage data and user feedback over the Internet that addresses these limitations to provide developers with a complementary source of usage- and usability-related information. Contributions include: a theory to motivate and guide data collection, an architecture capable of supporting very large scale data collection, and real-world experience suggesting the proposed approach is complementary to existing practice.

Keywords: Usability testing, beta testing, automated data collection, software monitoring, post-deployment evaluation

1 Introduction

Involving end users in the development of interactive systems increases the likelihood that those systems will be useful and usable. The Internet presents hitherto unprecedented, and currently underutilized, opportunities for increasing user involvement by: (1) enabling cheap, rapid, and large-scale distribution of software for evaluation purposes, and (2) providing convenient mechanisms for communicating application usage data and user feedback to interested development organizations.

Unfortunately, a major challenge facing development organizations today is that there are already more suspected problems, proposed solutions, and novel design ideas emanating from various stakeholders than there are resources available to address these issues (Cusumano & Selby, 1995). As a result, developers are often more concerned with addressing the following problems than with generating more ideas about how to improve designs:

Impact assessment: To what extent do suspected or observed problems actually impact users at large? What is the expected impact of implementing proposed solutions or novel ideas on users at large?

Effort allocation: Where should scarce design, implementation, testing, and usability evaluation resources be focused in order to produce the greatest benefit for users at large?

Current usability and beta testing practices do not adequately address these questions. This paper describes an approach to usage data and user feedback collection that complements existing practice by helping developers address these questions more directly.

2 Limitations of Current Practice

2.1 Usability Testing

Scale is the critical limiting factor in usability tests. Usability tests are typically restricted in terms of size, scope, location, and duration:

Size, because data collection and analysis limitations result in evaluation effort being linked to the number of evaluation subjects.

Scope, because typically only a small fraction of an application's functionality can be exercised in any given evaluation.
Location, because users are typically displaced from their normal work environments to more controlled laboratory conditions.

Duration, because users cannot devote extended periods of time to evaluation activities that take them away from their day-to-day responsibilities.

Perhaps more significantly, however, once problems have been identified in the lab, the impact assessment and effort allocation problems remain: What is the actual impact of identified problems on users at large? How should development resources be allocated to fix those problems? Furthermore, because usability testing is itself expensive in terms of user and evaluator effort: How should scarce usability resources be focused to produce the greatest benefit for users at large?

2.2 Beta Testing

Data quality is the critical limiting factor in beta tests. When beta testers report usability issues in addition to bugs, data quality is limited in a number of ways. Incentives are a problem, since users are typically more concerned with getting their work done than with paying the price of problem reporting, while developers receive most of the benefit. As a result, often only the most obvious or unrecoverable errors are reported.

Perhaps more significantly, there is often a paradoxical relationship between users' performance with respect to a particular application and their subjective ratings of its usability. Numerous usability professionals have observed this phenomenon. Users who perform well in usability tests often volunteer comments in which they report problems with the interface, although the problems did not affect their ability to complete tasks. When asked for a justification, these users say things like: "Well, it was easy for me, but I think other people would have been confused." On the other hand, users who have difficulties using a particular interface often do not volunteer comments and, if pressed, report that the interface is well designed and easy to use. When confronted with the discrepancy, these users say things like: "Someone with more experience would probably have had a much easier time," or "I always have more trouble than average with this sort of thing." (These examples were taken from a discussion group for usability researchers and professionals involved in usability evaluations.) As a result, potentially important feedback from beta testers having difficulties may fail to be reported, while unfounded and potentially misleading feedback from beta testers not having difficulties may be reported.

Nevertheless, beta tests do appear to offer good opportunities for collecting usability-related information. Smilowitz and colleagues showed that beta testers who were asked to record usability problems as they arose in normal use identified almost the same number of significant usability problems as identified in laboratory tests of the same software (Smilowitz et al., 1994). A later case study performed by Hartson and associates, using a remote data collection technique, also appeared to support these results (Hartson et al., 1996). However, while the number of usability problems identified in the lab test and beta test conditions was roughly equal, the number of common problems identified by both was rather small. Smilowitz and colleagues offered the following as one possible explanation: "In the lab test two observers with experience with the software identified and recorded the problems. In some cases, the users were not aware they were incorrectly using the tool or understanding how the tool worked. If the same is true of the beta testers, some severe problems may have been missed because the testers were not aware they were encountering a problem" (Smilowitz et al., 1994).
Thus, users are limited in their ability to identify and report problems due to a lack of knowledge regarding expected use. Another limitation identified by Smilowitz and colleagues is that the feedback reported in the beta test condition lacked details regarding the interactions leading up to problems and the frequency of problem occurrences. Without this information (or information regarding the frequency with which features associated with reported problems are used), it is difficult to assess the impact of reported problems, and therefore to decide how to allocate resources to fix those problems.

3 Related Work

3.1 Early Attempts to Exploit the Internet

Some researchers have investigated the use of Internet-based video conferencing and remote application sharing technologies, such as Microsoft NetMeeting™, to support remote usability evaluations (Castillo & Hartson, 1997). Unfortunately, while leveraging the Internet to overcome geographical barriers, these techniques do not exploit the enormous potential afforded by the Internet to lift current restrictions on evaluation size, scope, and duration. This is because a large amount of data is generated per user, and because observers are typically required to observe and interact with users on a one-on-one basis.

Others have investigated Internet-based user reporting of critical incidents to capture user feedback and limited usage information (Hartson et al., 1996). In this approach, users are trained to identify critical incidents themselves and to press a report button that sends video data surrounding user-identified incidents back to experimenters. While addressing, to a limited degree, the lack of detail in beta tester-reported data, this approach still suffers from the other problems associated with beta tester-reported feedback, including lack of proper incentives, the subjective feedback paradox, and lack of knowledge regarding expected use.

3.2 Automated Techniques

An alternative approach involves automatically capturing information about user and application behavior by monitoring the software components that make up the application and its user interface. This data can then be automatically transported to evaluators to identify potential problems. A number of instrumentation and monitoring techniques have been proposed for this purpose (Figure 1). However, as argued in more detail in (Hilbert & Redmiles, 2000), existing approaches all suffer from some combination of the following problems, limiting evaluation scalability and data quality:

The abstraction problem: Questions about usage typically occur in terms of concepts at higher levels of abstraction than represented in software component event data. This implies the need for data abstraction mechanisms to relate low-level data to higher-level concepts such as user interface and application features as well as users' tasks and goals.

The selection problem: The data required to answer usage questions is typically a small subset of the much larger set of data that might be captured. Failure to properly select data increases the amount of data that must be reported and decreases the likelihood that automated analysis techniques will identify events and patterns of interest in the noise. This implies the need for data selection mechanisms to separate necessary from unnecessary data prior to reporting and analysis.

The reduction problem: Much of the analysis needed to answer usage questions can actually be performed during data collection. Performing reduction during capture not only decreases the amount of data that must be reported, but also increases the likelihood that all the data necessary for analysis is actually captured. This implies the need for data reduction mechanisms to reduce data prior to reporting and analysis.

The context problem: Potentially critical information necessary in interpreting the significance of events is often not available in event data alone. However, such information may be available for the asking from the user interface, application, artifacts, or user. This implies the need for context-capture mechanisms to allow state data to be used in abstraction, selection, and reduction.

The evolution problem: Finally, data collection needs evolve over time independently from applications. Unnecessary coupling of data collection and application code increases the cost of evolution and impact on users. This implies the need for independently evolvable data collection mechanisms that can be modified over time without impacting application deployment or use.

[Figure 1 is a table whose columns are the abstraction, selection, reduction, context, and evolution problems and whose rows are the techniques of Chen 1990; Badre & Santos 1991; Weiler 1993; Hoiem & Sullivan 1994; Badre et al. 1995; Cook et al. 1995 and Kay & Thomas 1995; ErgoLight 1998; and Lecerof & Paterno 1998.]

Figure 1: Existing data collection approaches and their support for the identified problems. A small x indicates limited support while a large X indicates more extensive support; instr indicates that the problem can be addressed, but only by modifying hard-coded instrumentation embedded in application code.

Figure 1 summarizes the extent to which existing approaches address these problems.

4 Approach

We propose a novel approach to large-scale usage data and user feedback collection that addresses these problems. Next we present the theory behind this work.
4.1 Theory of Expectations

When developing systems, developers rely on a number of expectations about how those systems will be used. We call these usage expectations (Girgensohn et al., 1994). Developers' expectations are based on their knowledge of requirements, the specific tasks and work environments of users, the application domain, and past experience in developing and using applications themselves. Some expectations are explicitly represented, for example, those specified in requirements and in use cases. Others are implicit, including assumptions about usage that are encoded in user interface layout and application structure. For instance, implicit in the layout of most data entry forms is the expectation that users will complete them from top to bottom with only minor variation. In laying out menus and toolbars, it is usually expected that features placed on the toolbar will be more frequently used than those deeply nested in menus. Such expectations are typically not represented explicitly and, as a result, fail to be tested adequately.

Detecting and resolving mismatches between developers' expectations and actual use is important in improving the fit between design and use. Once mismatches are detected, they may be resolved in one of two ways. Developers may adjust their expectations to better match actual use, thus refining system requirements and eventually making the system more usable and/or useful. For instance, features that were expected to be used rarely, but are used often in practice, can be made easier to access and more efficient.

Alternatively, users can learn about developers' expectations, thus learning how to use the existing system more effectively. For instance, learning that they are not expected to type full URLs in Netscape Navigator™ can lead users to omit the leading "http://www." and trailing ".com" when entering commercial URLs.

Thus, it is important to identify, and make explicit, usage expectations that importantly affect, or are embodied in, application designs. This can help developers think more clearly about the implications of design decisions and may, in itself, promote improved design. Usage data collection techniques may then be directed at capturing information that is helpful in detecting mismatches between expected and actual use, and mismatches may be used as opportunities to adjust the design based on usage-related information, or to adjust usage based on design-related information.

4.2 Technical Overview

The proposed approach involves a development platform for creating software agents that are deployed over the Internet to observe application use and report usage data and user feedback to developers. To this end, the following process is employed: (1) developers design applications and identify usage expectations; (2) developers create agents to monitor application use and capture usage data and user feedback; (3) agents are deployed over the Internet independently of the application to run on users' computers; (4) agents perform in-context data abstraction, selection, and reduction as needed to allow actual use to be compared against expected use; and (5) agents report data back to developers to inform further evolution of expectations, the application, and agents.

The fundamental strategy underlying this work is to exploit already existing information produced by user interface and application components to support usage data collection. To this end, an event service providing generic event and state monitoring capabilities was implemented, on top of which an agent service providing generic data abstraction, selection, and reduction services was implemented. Because the means for accessing event and state information varies depending on the components used to develop applications, we introduced the notion of a default model to mediate between monitored components and the event service. Furthermore, because numerous agents were observed to be useful across multiple applications, we introduced the notion of default agents to allow higher-level generic data collection services to be reused across applications. See Figure 2.

[Figure 2 is a layer diagram. From top to bottom: User Agents (application-specific data collection code, e.g., menus, toolbars, commands, dialogs, etc.); Default Agents (reusable data collection code, e.g., "use" and "value provided" events); Agent Service (generic data abstraction, selection, reduction, and context-capture services and authoring GUI); Event Service (generic event and state monitoring services); Default Model (component naming, relationship, and event monitoring; simplified access to state); Application (components, relationships, events, state).]

Figure 2: The layered relationship between the application, default model, event service, agent service, default agents, and user-defined agents. Shading indicates the degree to which each aspect is believed to be generic and reusable.
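As a rough illustration of how an agent sits on top of these layers, the sketch below shows low-level component events being abstracted, selected, and reduced in-context before anything is reported. All of the names (ComponentEvent, UsageAgent, and so on) are invented for illustration; they are not the actual API of the prototype described here.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Predicate;

    public class AgentSketch {

        // A low-level event as the default model might report it.
        record ComponentEvent(String componentName, String eventType) {}

        // A hypothetical agent that abstracts, selects, and reduces events in-context.
        static class UsageAgent {
            private final Predicate<ComponentEvent> selector;            // selection
            private final Map<String, Integer> counts = new HashMap<>(); // reduction

            UsageAgent(Predicate<ComponentEvent> selector) {
                this.selector = selector;
            }

            // Called by the (hypothetical) event service for every monitored event.
            void onEvent(ComponentEvent e) {
                if (!selector.test(e)) return;                // discard unneeded data early
                String abstractEvent = toAbstractEvent(e);    // abstraction
                counts.merge(abstractEvent, 1, Integer::sum); // keep a count, not a log
            }

            // Map a low-level component event to a higher-level "feature used" event.
            private String toAbstractEvent(ComponentEvent e) {
                return "use of " + e.componentName();
            }

            // What would eventually be reported back to developers.
            Map<String, Integer> report() {
                return counts;
            }
        }

        public static void main(String[] args) {
            // Select only menu/toolbar activations; ignore mouse moves, key presses, etc.
            UsageAgent agent = new UsageAgent(e -> e.eventType().equals("actionPerformed"));

            // A simulated stream of low-level events from the default model.
            List<ComponentEvent> stream = List.of(
                    new ComponentEvent("File/Save", "actionPerformed"),
                    new ComponentEvent("canvas", "mouseMoved"),
                    new ComponentEvent("File/Save", "actionPerformed"),
                    new ComponentEvent("Toolbar/Print", "actionPerformed"));

            for (ComponentEvent e : stream) {
                agent.onEvent(e);
            }
            // Prints something like: {use of File/Save=2, use of Toolbar/Print=1}
            System.out.println(agent.report());
        }
    }

The point of the sketch is only that selection and reduction happen as events arrive, so the data that eventually leaves the user's machine is already small and already phrased in terms developers care about.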
Note that the proposed approach is different from traditional event monitoring approaches in that data abstraction, selection, and reduction are performed during data collection (by software agents) as opposed to after data collection (by human investigators). This allows abstraction, selection, and reduction to be performed in-context, resulting in improved data quality and reduced data reporting and post-hoc analysis needs. For more technical details see (Hilbert, 1999).

The current prototype works with Java applications and requires developers to insert two lines of code into their application: one to start data collection and one to stop data collection and report results. Once this has been done, developers use an agent authoring user interface to define agents without writing code (Figure 4). Once agents have been defined, they are serialized and stored in an ASCII file with a URL on a development computer. The URL is passed as a command-line argument to the application of interest. When the application is run, the URL is automatically downloaded and the latest agents instantiated on the user's computer. Agent reports are sent to development computers via e-mail upon application exit. For more technical details see (Hilbert, 1999).
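The paper does not spell out the integration API, but the two-line integration described above might look roughly like the following sketch. The UsageDataCollection class, its method names, and the example URL are assumptions made for illustration only.

    // Hypothetical sketch of the two-line integration described above.
    // UsageDataCollection, its methods, and the URL are invented for
    // illustration; they are not the prototype's actual API.
    public class InstrumentedApp {
        public static void main(String[] args) {
            // Line 1: start data collection; the URL of the serialized agent
            // definitions could be taken from the command line, as described.
            String agentDefinitionUrl = args.length > 0
                    ? args[0]
                    : "http://dev.example.com/agents/word-processor.agents";
            UsageDataCollection.start(agentDefinitionUrl);

            runApplication();   // the application's normal user interface loop

            // Line 2: stop data collection and report agent results
            // back to the development organization.
            UsageDataCollection.stop();
        }

        private static void runApplication() {
            // Application code would go here.
        }
    }

    // Minimal stand-in so the sketch compiles; the real service would download
    // the latest agent definitions and instantiate them on the user's machine.
    class UsageDataCollection {
        static void start(String agentDefinitionUrl) {
            System.out.println("Downloading agents from " + agentDefinitionUrl);
        }
        static void stop() {
            System.out.println("Reporting agent data to developers");
        }
    }

Keeping the agent definitions behind a URL, rather than compiled into the application, is what allows data collection to evolve independently of application deployment.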

4.3 Usage Scenario

To see how these services may be used in practice, consider the following scenario developed by Lockheed Martin C2 Integration Systems as part of a government-sponsored research demonstration. A group of engineers is tasked with designing a web-based user interface to allow users to request information regarding Department of Defense cargo in transit. After involving users in design, constructing use cases, performing task analyses, doing cognitive walkthroughs, and employing other user-centered design methods, a prototype interface is ready for deployment (Figure 3).

Figure 3: A prototype user interface for tracking Department of Defense cargo in transit.

The engineers in this scenario were particularly interested in verifying the expectation that users would not frequently change the mode of travel selection in the first section of the form (e.g., "Air", "Ocean", "Motor", "Rail", or "Any") after having made subsequent selections, since the mode of travel selection affects the choices available in subsequent sections. Expecting that this would not be a common problem, the engineers decided to reset all selections to their default values whenever the mode of travel selection is reselected.

In Figure 4, the developer has defined an agent that fires whenever the user selects one of the controls in the mode of travel section of the interface and then selects controls outside that section. This agent is then used in conjunction with other agents to detect when the user has changed the mode of travel after having made subsequent selections.

Figure 4: Agent authoring interface showing events (top left), components (top middle), global variables (top right), agents (bottom left), and agent properties (bottom right).
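Expressed as code rather than through the authoring interface, the trigger just described might look something like the sketch below. The control names are taken from the scenario, but the ExpectationAgent and SelectionEvent types are invented for illustration.

    import java.util.Set;

    // Hypothetical sketch of the trigger the engineers defined in the authoring
    // GUI: fire when a mode-of-travel selection is followed by a selection of a
    // control outside that section of the form.
    public class ModeOfTravelAgentSketch {

        record SelectionEvent(String componentName) {}

        static class ExpectationAgent {
            private static final Set<String> MODE_OF_TRAVEL_CONTROLS =
                    Set.of("Air", "Ocean", "Motor", "Rail", "Any");

            private boolean selectedModeOfTravel = false;
            private int firings = 0;   // reduced to a count before reporting

            void onEvent(SelectionEvent e) {
                boolean inSection = MODE_OF_TRAVEL_CONTROLS.contains(e.componentName());
                if (selectedModeOfTravel && !inSection) {
                    firings++;                      // expectation-relevant pattern observed
                    selectedModeOfTravel = false;
                } else if (inSection) {
                    selectedModeOfTravel = true;
                }
            }

            int firings() {
                return firings;
            }
        }

        public static void main(String[] args) {
            ExpectationAgent agent = new ExpectationAgent();
            for (SelectionEvent e : new SelectionEvent[] {
                    new SelectionEvent("Air"),
                    new SelectionEvent("Origin"),       // outside the section: agent fires
                    new SelectionEvent("Destination"),
                    new SelectionEvent("Ocean") }) {    // mode of travel changed afterwards
                agent.onEvent(e);
            }
            System.out.println("Agent fired " + agent.firings() + " time(s)");
        }
    }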

In addition to capturing data unobtrusively, the engineers decided to configure an agent to notify users (by posting a message) when it detected behavior in violation of developers' expectations (Figure 5a). By selecting an agent message and pressing the Feedback button, users could learn more about the violated expectation and respond with feedback if desired (Figure 5b). Feedback was reported along with general usage data via e-mail each time the application was exited. Agent-collected data was later reviewed by support engineers who provided a new release based on the data collected in the field.

Figure 5: Agent notification (a) and user feedback (b). Use of these data collection features is optional.

It is tempting to think that this example has a clear design flaw that, if corrected, would obviate the need for data collection. Namely, one might argue, the application should detect which selections must be reselected and only require users to reselect those values. To illustrate how this objection misses the mark, the Lockheed personnel deliberately fashioned the scenario to include a user responding to the agent with exactly this suggestion (Figure 5b). After reviewing the agent-collected feedback, the engineers consider the suggestion but, unsure of whether to implement it (due to its impact on the current design, implementation, and test plans), decide to review the usage data log. The log, which documents over a month of use with over 100 users, indicates that this problem has only occurred twice, and both times with the same user. As a result, the developers decide to put the change request on hold. The ability to base design and effort allocation decisions on this type of empirical data is one of the key contributions of this approach.

5 Discussion

5.1 Lab Experience

We (and Lockheed personnel) have authored data collection agents for a number of example applications, including the database query interface described in the usage scenario (5 agents), an interface for provisioning phone service accounts (2 agents), and a word processing application (53 agents).

Figure 6 illustrates the impact of abstraction, selection, and reduction on the number of bytes of data generated by the word processing application (plotted on a log scale) over time. The first point in each series indicates the number of bytes generated by the approach when applied to a simple word processing session in which a user opens a file, performs a number of menu and toolbar operations, edits text, and saves and closes the file. The subsequent four points in each series indicate the amount of data generated assuming the user performs the same basic actions four times over. Thus, this graph represents an approximation of data growth over time based on the assumption that longer sessions primarily consist of repetitions of the same high-level actions performed in shorter sessions.

The raw data series indicates the number of bytes of data generated if all window system events are captured, including all mouse movements and key presses. The abstracted data series indicates the amount of data generated if only abstract events corresponding to proactive manipulation of user interface components are captured (about 4% of the size of the raw data). The selected data series indicates the amount of data generated if only selected abstract events and state values regarding menu, toolbar, and dialog use are captured (about 1% of the size of the raw data). Finally, the reduced data series indicates the amount of data generated if abstracted and selected data is reduced to simple counts of unique observed events (and event sequences) and state values (and state value vectors) prior to reporting (less than 1% of the size of the raw data). There is little to no growth of reduced data over time because data size increases only when unique events or state values are observed for the first time.

An uncompressed ASCII agent definition file containing default agents and 53 user-defined agents for the word processor example is less than 70K bytes. The entire data collection infrastructure is less than 500K bytes.

Figure 6: Impact of abstraction, selection, and reduction on bytes of data generated (plotted on a log scale) over time. [Axes: bytes of data vs. time; series: raw, abstracted, selected, reduced.]
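The near-flat growth of the reduced series follows from keeping only counts of unique events and event sequences. The following sketch, with invented names, reduces a repeated session in this way; the number of stored entries stops growing after the first repetition even though the event stream keeps getting longer.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical sketch of count-based reduction: repeated sessions that
    // exercise the same high-level actions add almost no new data, because
    // only counts of unique events and event pairs are stored.
    public class ReductionSketch {
        public static void main(String[] args) {
            List<String> session = List.of(
                    "open file", "menu: Format/Bold", "toolbar: Save", "close file");

            Map<String, Integer> eventCounts = new HashMap<>();
            Map<String, Integer> sequenceCounts = new HashMap<>();

            // Replay the same session several times, as in the growth estimate
            // described above, and reduce as the events arrive.
            for (int repetition = 1; repetition <= 5; repetition++) {
                String previous = null;
                for (String event : session) {
                    eventCounts.merge(event, 1, Integer::sum);
                    if (previous != null) {
                        sequenceCounts.merge(previous + " -> " + event, 1, Integer::sum);
                    }
                    previous = event;
                }
                // The number of stored entries, a proxy for reduced data size,
                // is the same after one repetition as after five.
                System.out.printf("after %d repetition(s): %d unique events, %d unique pairs%n",
                        repetition, eventCounts.size(), sequenceCounts.size());
            }
        }
    }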
5.2 Non-Lab Experience

We have also attempted to evaluate and refine our ideas and prototypes by engaging in a number of evaluative activities outside of the research lab. While these activities have been informal, we believe they have been practical and provide meaningful insights given our hypotheses and domain of interest. Namely, these experiences have all contributed evidence to support the hypothesis that automated software monitoring can indeed be used to capture data to inform design, impact assessment, and effort allocation decisions.

NYNEX Corporation: The Bridget System

The Bridget system is a form-based phone service provisioning system developed in cooperation between the Intelligent Interfaces Group at NYNEX Corporation and the Human-Computer Communication Group at the University of Colorado at Boulder (Girgensohn et al., 1994). The Bridget development process was participatory and iterative in nature. In each iteration, users were asked to perform tasks with a prototype while developers observed and noted discrepancies between expected and actual use. Developers then discussed observed mismatches with users after each task. Users also interacted with Bridget on their own and voluntarily reported feedback to developers.

There were two major results of this experience. First, the design process outlined above led to successful design improvements that might not have been introduced otherwise. Second, we identified a number of areas in which automated support might improve the process. Motivated by this experience, the second author of this paper helped develop a prototype agent-based system for observing usage on developers' behalf and initiating remote communication between developers and users when mismatches between expected and actual use were observed. This prototype served as the basis for the work described in this paper. However, the idea of capturing generic usage data in addition to detecting specific mismatches was not addressed. The significance of this oversight is highlighted below.

Lockheed Martin Corporation: The GTN Scenario

Based on the NYNEX experience, and with the goal of making the approach more scalable, the authors developed a second prototype (described in this paper) at the University of California at Irvine. Independent developers at Lockheed Martin Corporation then integrated this second prototype into a logistics information system as part of a government-sponsored demonstration scenario (described in the usage scenario).

There were two major results of this experience. First, the experience suggested that independent developers could successfully apply the approach with only moderate effort and that significant data could nonetheless be captured. Second, the data that was collected could be used to support impact assessment and effort allocation decisions in addition to design decisions (as illustrated in the usage scenario). This outcome had not been anticipated since, up to this point, our research had focused on capturing design-related information.

Microsoft Corporation: The Instrumented Version

Finally, in order to better understand the challenges faced by organizations attempting to capture usage data on a large scale in practice, the first author of this paper managed an instrumentation effort at Microsoft Corporation. The effort involved capturing basic usage data regarding the behavior of 500 to 1000 volunteer users of an instrumented version of a well-known Microsoft product over a two-month period. (Due to a non-disclosure agreement, we cannot name the product nor discuss how it was improved based on usage data. However, we can describe the data collection approach employed by Microsoft.)

Because this was not the first time data would be collected regarding the use of this product, infrastructure already existed to capture data. The infrastructure consisted of instrumentation code inserted directly into application code that captured data of interest and wrote it to binary files. Users then copied these files to floppy disks and mailed them to Microsoft after a pre-specified period of use. Unfortunately, due to: (a) the sheer amount of instrumentation code already embedded in the application, (b) the limited time available for updating instrumentation to capture data regarding new features, and (c) the requirement to compare the latest data against prior data collection results, we were unable to reimplement the data collection infrastructure based upon this research. Thus, the existing infrastructure was used, allowing us to observe, first-hand, the difficulties and limitations inherent in such an approach.

The results of this experience were instructive in a number of ways. First and foremost, it further supported the hypothesis that automated software monitoring can indeed be used to inform design, impact assessment, and effort allocation decisions. Furthermore, it was an existence proof that there are in fact situations in which the benefits of data collection are perceived to outweigh the maintenance and analysis costs, even in an extremely competitive development organization in which time-to-market is of utmost importance. Lessons learned follow.

How Practice can be Informed by this Research

The Microsoft experience has further validated our emphasis on the abstraction, selection, reduction, context, and evolution problems by illustrating the negative results of failing to adequately address these problems in practice. First, because the approach relies on intrusive instrumentation of application code, evolution is a critical problem.
In order to modify data collection in any way (for instance, to adjust what data is collected, i.e., selection), the application itself must be modified, impacting the build and test processes. As a result, development and maintenance of instrumentation is costly, resulting in studies only being conducted irregularly. Furthermore, there is no mechanism for flexibly mapping between lower-level events and higher-level events of interest (i.e., abstraction). As a result, abstraction must be performed as part of the post-hoc analysis process, resulting in failures to notice errors in data collection that affect abstraction (such as missing context) until after data has been collected. Also, because data is not reduced prior to reporting, a large amount of data is reported, post-hoc analysis is unnecessarily complicated, and most data is never used in analysis (particularly sequential aspects). Finally, the approach does not allow users to provide feedback to augment automatically captured data.

How Practice has Informed this Research

Despite these observed limitations, this experience also resulted in a number of insights that have informed and refined this research. The ease with which we incorporated these insights into the proposed approach (and associated methodological considerations) further increases our confidence in the flexibility and generality of the approach.

Most importantly, the experience helped motivate a shift from micro expectations regarding the behavior of single users within single sessions to macro expectations regarding the behavior of multiple users over multiple sessions. In the beginning, we focused on expectations of the first kind. However, this sort of analysis, by itself, is challenging due to difficulties in inferring user intent and in anticipating all the important areas in which mismatches might occur. Furthermore, once mismatches are identified, whether or not developers should take action and adjust the design is not clear in the absence of more general data regarding how the application is used on a large scale.

For instance, how should developers react to the fact that the "print current page" option in the print dialog was used 10,000 times? The number of occurrences of any event must be compared against the number of times the event might have occurred. This is the denominator problem. 10,000 uses of the "print current page" option out of 11,000 uses of the print dialog paints a different picture from 10,000 uses of the option out of 1,000,000 uses of the dialog. The first scenario suggests the option might be made the default while the second does not.

A related issue is the need for more general data against which to compare specific data collection results. This is the baseline problem. For instance, if there are design issues associated with features that are much more frequently used than printing, then perhaps those issues should take precedence over changes to the print dialog. Thus, generic usage information should be captured to provide developers with a better sense of the big picture of how applications are used.

6 Conclusions

We have presented a theory to motivate and guide usage data collection, an architecture capable of supporting larger scale collection (than currently possible in usability tests) of higher quality data (than currently possible in beta tests), and real-world experience suggesting the proposed approach is complementary to existing usability practice.

While our initial intent was to support usability evaluations directly, our experience suggests that automated techniques for capturing usage information are better suited to capturing indicators of the big picture of how applications are used than to identifying subtle, nuanced, and unexpected usability issues. However, these strengths and weaknesses nicely complement the strengths and weaknesses inherent in current usability testing practice, in which subtle usability problems may be identified through careful human observation, but in which there is little sense of the big picture of how applications are used on a large scale.

One Microsoft usability professional reported to us that the usability team is often approached by design and development team members with questions such as "How often do users do X?" or "How often does Y happen?" This is obviously useful information for developers wishing to assess the impact of suspected problems or to focus effort for the next release. However, it is not information that can be reliably collected in the usability lab.

We are in the process of generalizing our approach to capture data regarding arbitrary software systems implemented in a component- and event-based architectural style (e.g., JavaBeans™) and are seeking further evaluation opportunities.

References

Badre, A.N. & Santos, P.J. (1991). A knowledge-based system for capturing human-computer interaction events: CHIME. Technical Report GIT-GVU-91-21.

Badre, A.N., Guzdial, M., Hudson, S.E., & Santos, P.J. (1995). A user interface evaluation environment using synchronized video, visualizations, and event trace data. Journal of Software Quality, Vol. 4.

Castillo, J.C. & Hartson, H.R. (1997). Remote usability evaluation site.

Chen, J. (1990). Providing intrinsic support for user interface monitoring. INTERACT '90.

Cook, R., Kay, J., Ryan, G., & Thomas, R.C. (1995). A toolkit for appraising the long-term usability of a text editor. Software Quality Journal, Vol. 4, No. 2.

Cusumano, M.A. & Selby, R.W. (1995). Microsoft Secrets. The Free Press, New York, NY.

Ergolight Usability Software (1998). Product web pages.
Girgensohn, A., Redmiles, D.F., & Shipman, F.M. III (1994). Agent-based support for communication between developers and users in software design. KBSE '94.

Hartson, H.R., Castillo, J.C., Kelso, J., & Neale, W.C. (1996). Remote evaluation: the network as an extension of the usability laboratory. CHI '96.

Hilbert, D.M. & Redmiles, D.F. (2000). Extracting usability information from user interface events. ACM Computing Surveys (to appear).

Hilbert, D.M. (1999). Large-scale collection of application usage data and user feedback to inform interactive software development. Doctoral Dissertation. Technical Report UCI-ICS.

Hoiem, D.E. & Sullivan, K.D. (1994). Designing and using integrated data collection and analysis tools: challenges and considerations. In Nielsen, J. (Ed.), Usability Laboratories Special Issue of Behaviour and Information Technology, Vol. 13, No. 1 & 2.

Kay, J. & Thomas, R.C. (1995). Studying long-term system use. Communications of the ACM, Vol. 38, No. 7.

Lecerof, A. & Paterno, F. (1998). Automatic support for usability evaluation. IEEE Transactions on Software Engineering, Vol. 24, No. 10.

Smilowitz, E.D., Darnell, M.J., & Benson, A.E. (1994). Are we overlooking some usability testing methods? A comparison of lab, beta, and forum tests. In Nielsen, J. (Ed.), Usability Laboratories Special Issue of Behaviour and Information Technology, Vol. 13, No. 1 & 2.

Weiler, P. (1993). Software for the usability lab: a sampling of current tools. INTERCHI '93.


More information

Paper 10-27 Designing Web Applications: Lessons from SAS User Interface Analysts Todd Barlow, SAS Institute Inc., Cary, NC

Paper 10-27 Designing Web Applications: Lessons from SAS User Interface Analysts Todd Barlow, SAS Institute Inc., Cary, NC Paper 10-27 Designing Web Applications: Lessons from SAS User Interface Analysts Todd Barlow, SAS Institute Inc., Cary, NC ABSTRACT Web application user interfaces combine aspects of non-web GUI design

More information

Lightweight Service-Based Software Architecture

Lightweight Service-Based Software Architecture Lightweight Service-Based Software Architecture Mikko Polojärvi and Jukka Riekki Intelligent Systems Group and Infotech Oulu University of Oulu, Oulu, Finland {mikko.polojarvi,jukka.riekki}@ee.oulu.fi

More information

Basic Unified Process: A Process for Small and Agile Projects

Basic Unified Process: A Process for Small and Agile Projects Basic Unified Process: A Process for Small and Agile Projects Ricardo Balduino - Rational Unified Process Content Developer, IBM Introduction Small projects have different process needs than larger projects.

More information

NETWRIX EVENT LOG MANAGER

NETWRIX EVENT LOG MANAGER NETWRIX EVENT LOG MANAGER QUICK-START GUIDE FOR THE ENTERPRISE EDITION Product Version: 4.0 July/2012. Legal Notice The information in this publication is furnished for information use only, and does not

More information

Data Mining in Web Search Engine Optimization and User Assisted Rank Results

Data Mining in Web Search Engine Optimization and User Assisted Rank Results Data Mining in Web Search Engine Optimization and User Assisted Rank Results Minky Jindal Institute of Technology and Management Gurgaon 122017, Haryana, India Nisha kharb Institute of Technology and Management

More information

Wrangling Actionable Insights from Organizational Data

Wrangling Actionable Insights from Organizational Data Wrangling Actionable Insights from Organizational Data Koverse Eases Big Data Analytics for Those with Strong Security Requirements The amount of data created and stored by organizations around the world

More information

Studying Code Development for High Performance Computing: The HPCS Program

Studying Code Development for High Performance Computing: The HPCS Program Studying Code Development for High Performance Computing: The HPCS Program Jeff Carver 1, Sima Asgari 1, Victor Basili 1,2, Lorin Hochstein 1, Jeffrey K. Hollingsworth 1, Forrest Shull 2, Marv Zelkowitz

More information

Applying the Factory Model to Bulk Cloud Migration

Applying the Factory Model to Bulk Cloud Migration WHITE PAPER Applying the Factory Model to Bulk Cloud Migration By David Pallmann, GM Custom Application Development, Neudesic This white paper discusses moving applications to the public cloud in bulk,

More information

www.novell.com/documentation Advanced User Guide Vibe 4.0 March 2015

www.novell.com/documentation Advanced User Guide Vibe 4.0 March 2015 www.novell.com/documentation Advanced User Guide Vibe 4.0 March 2015 Legal Notices Novell, Inc., makes no representations or warranties with respect to the contents or use of this documentation, and specifically

More information

Business Insight Report Authoring Getting Started Guide

Business Insight Report Authoring Getting Started Guide Business Insight Report Authoring Getting Started Guide Version: 6.6 Written by: Product Documentation, R&D Date: February 2011 ImageNow and CaptureNow are registered trademarks of Perceptive Software,

More information

Eliminate Memory Errors and Improve Program Stability

Eliminate Memory Errors and Improve Program Stability Eliminate Memory Errors and Improve Program Stability with Intel Parallel Studio XE Can running one simple tool make a difference? Yes, in many cases. You can find errors that cause complex, intermittent

More information

The Practical Organization of Automated Software Testing

The Practical Organization of Automated Software Testing The Practical Organization of Automated Software Testing Author: Herbert M. Isenberg Ph.D. Quality Assurance Architect Oacis Healthcare Systems PO Box 3178 Sausalito, CA. 94966 Type: Experience Report

More information

IBM RATIONAL PERFORMANCE TESTER

IBM RATIONAL PERFORMANCE TESTER IBM RATIONAL PERFORMANCE TESTER Today, a major portion of newly developed enterprise applications is based on Internet connectivity of a geographically distributed work force that all need on-line access

More information

QAD Enterprise Applications. Training Guide Demand Management 6.1 Technical Training

QAD Enterprise Applications. Training Guide Demand Management 6.1 Technical Training QAD Enterprise Applications Training Guide Demand Management 6.1 Technical Training 70-3248-6.1 QAD Enterprise Applications February 2012 This document contains proprietary information that is protected

More information

How Programmers Use Internet Resources to Aid Programming

How Programmers Use Internet Resources to Aid Programming How Programmers Use Internet Resources to Aid Programming Jeffrey Stylos Brad A. Myers Computer Science Department and Human-Computer Interaction Institute Carnegie Mellon University 5000 Forbes Ave Pittsburgh,

More information

An Oracle White Paper September 2011. Oracle Team Productivity Center

An Oracle White Paper September 2011. Oracle Team Productivity Center Oracle Team Productivity Center Overview An Oracle White Paper September 2011 Oracle Team Productivity Center Overview Oracle Team Productivity Center Overview Introduction... 1 Installation... 2 Architecture...

More information

User-Centric Client Management with System Center 2012 Configuration Manager in Microsoft IT

User-Centric Client Management with System Center 2012 Configuration Manager in Microsoft IT Situation Microsoft IT needed to evolve their Configuration Manager 2007-based environment that used homegrown application distribution services to meet the self-service needs of Microsoft personnel. Solution

More information

Configuration Management Models in Commercial Environments

Configuration Management Models in Commercial Environments Technical Report CMU/SEI-91-TR-7 ESD-9-TR-7 Configuration Management Models in Commercial Environments Peter H. Feiler March 1991 Technical Report CMU/SEI-91-TR-7 ESD-91-TR-7 March 1991 Configuration Management

More information

Development Methodologies Compared

Development Methodologies Compared N CYCLES software solutions Development Methodologies Compared Why different projects require different development methodologies. December 2002 Dan Marks 65 Germantown Court 1616 West Gate Circle Suite

More information

Applying a User-Centered Design Approach to Data Management: Paper and Computer Testing. Jeffrey R. Wilson, Ph.D. Pearson.

Applying a User-Centered Design Approach to Data Management: Paper and Computer Testing. Jeffrey R. Wilson, Ph.D. Pearson. Applying a User-Centered Design Approach to Data Management: Paper and Computer Testing Jeffrey R. Wilson, Ph.D. Pearson March 2008 Paper presented at the annual conference of the American Educational

More information

Paper Prototyping as a core tool in the design of mobile phone user experiences

Paper Prototyping as a core tool in the design of mobile phone user experiences Paper Prototyping as a core tool in the design of mobile phone user experiences Introduction There is much competition in the mobile telecoms industry. Mobile devices are increasingly featurerich they

More information

SONA SYSTEMS RESEARCHER DOCUMENTATION

SONA SYSTEMS RESEARCHER DOCUMENTATION SONA SYSTEMS RESEARCHER DOCUMENTATION Introduction Sona Systems is used for the scheduling and management of research participants and the studies they participate in. Participants, researchers, principal

More information

What is a life cycle model?

What is a life cycle model? What is a life cycle model? Framework under which a software product is going to be developed. Defines the phases that the product under development will go through. Identifies activities involved in each

More information

JRefleX: Towards Supporting Small Student Software Teams

JRefleX: Towards Supporting Small Student Software Teams JRefleX: Towards Supporting Small Student Software Teams Kenny Wong, Warren Blanchet, Ying Liu, Curtis Schofield, Eleni Stroulia, Zhenchang Xing Department of Computing Science University of Alberta {kenw,blanchet,yingl,schofiel,stroulia,xing}@cs.ualberta.ca

More information

GETTING REAL ABOUT SECURITY MANAGEMENT AND "BIG DATA"

GETTING REAL ABOUT SECURITY MANAGEMENT AND BIG DATA GETTING REAL ABOUT SECURITY MANAGEMENT AND "BIG DATA" A Roadmap for "Big Data" in Security Analytics ESSENTIALS This paper examines: Escalating complexity of the security management environment, from threats

More information

Exclaimer Signature Manager 2.0 User Manual

Exclaimer Signature Manager 2.0 User Manual Exclaimer Exclaimer UK +44 (0) 1252 531 422 USA 1-888-450-9631 info@exclaimer.com Contents GETTING STARTED... 10 Signature Manager Overview... 11 How Does it Work?... 11 But That's Not All...... 12 And

More information

MWSUG 2011 - Paper S111

MWSUG 2011 - Paper S111 MWSUG 2011 - Paper S111 Dealing with Duplicates in Your Data Joshua M. Horstman, First Phase Consulting, Inc., Indianapolis IN Roger D. Muller, First Phase Consulting, Inc., Carmel IN Abstract As SAS programmers,

More information

TeamCompanion Solution Overview. Visual Studio

TeamCompanion Solution Overview. Visual Studio TeamCompanion Solution Overview Visual Studio Information in this document, including URL and other Internet Web site references, is subject to change without notice. Unless otherwise noted, the example

More information

UX analytics to make a good app awesome

UX analytics to make a good app awesome UX analytics to make a good app awesome White Paper Copyright 2014 UXprobe bvba Table of contents Executive summary.... 3 1. Building a good app is a challenge... 4 2. How is UX of apps measured today?....

More information

One of the fundamental kinds of Web sites that SharePoint 2010 allows

One of the fundamental kinds of Web sites that SharePoint 2010 allows Chapter 1 Getting to Know Your Team Site In This Chapter Requesting a new team site and opening it in the browser Participating in a team site Changing your team site s home page One of the fundamental

More information

Video Conferencing Task Query Blair Willems Jack Grigg

Video Conferencing Task Query Blair Willems Jack Grigg Video Conferencing Task Query Blair Willems Jack Grigg Supervisors: Dr. Mary Allan and Professor David Thorns University of Canterbury 1. Introduction This report documents the development of a scenario

More information

Strategic Management System for Academic World

Strategic Management System for Academic World 2011 International Conference on Software and Computer Applications IPCSIT vol.9 (2011) (2011) IACSIT Press, Singapore Strategic Management System for Academic World Expert System Based on Composition

More information

Microsoft Office 2010: Access 2010, Excel 2010, Lync 2010 learning assets

Microsoft Office 2010: Access 2010, Excel 2010, Lync 2010 learning assets Microsoft Office 2010: Access 2010, Excel 2010, Lync 2010 learning assets Simply type the id# in the search mechanism of ACS Skills Online to access the learning assets outlined below. Titles Microsoft

More information

Citrix Systems, Inc.

Citrix Systems, Inc. Citrix Password Manager Quick Deployment Guide Install and Use Password Manager on Presentation Server in Under Two Hours Citrix Systems, Inc. Notice The information in this publication is subject to change

More information

VMware Performance and Capacity Management Accelerator Service

VMware Performance and Capacity Management Accelerator Service AT A GLANCE The VMware Performance and Capacity Management Accelerator Service rapidly deploys a performance management, capacity optimization, and log management solution focused on a limited predefined

More information

Project Knowledge Management Based on Social Networks

Project Knowledge Management Based on Social Networks DOI: 10.7763/IPEDR. 2014. V70. 10 Project Knowledge Management Based on Social Networks Panos Fitsilis 1+, Vassilis Gerogiannis 1, and Leonidas Anthopoulos 1 1 Business Administration Dep., Technological

More information

User Guide Package Exception Management

User Guide Package Exception Management User Guide Package Exception Management 70-3262-4.6 PRECISION Applications 2012 September 2012 2012 Precision Software, a division of QAD Inc. Precision Software products are copyrighted and all rights

More information